
BTech-E Div

Assignment 1
Roll No: E008 Name: Satnam Singh Pahwa
Class: B. Tech CE Batch: E1
Date of Experiment: 13/08/2020 Date of Submission
Grade

Q1 Compare A* and AO* algorithm.

A* vs AO*:
1. A* is a computer algorithm used in pathfinding and graph traversal: the process of plotting an efficiently directed path between a number of points, called nodes. AO* follows a similar procedure, but there are constraints on traversing specific paths.
2. In A*, you traverse the tree in depth and keep adding up the total cost: the cost of reaching the goal state from the current state is added to the cost of reaching the current state. In AO*, when you traverse a path, the costs of all the paths originating from the preceding node are added up to the level where you find the goal state, regardless of whether they take you to the goal state or not.
3. A* is used to find just one path; AO* is used to find more than one path.
4. A* is often known as the OR graph algorithm; AO* is often known as the AND-OR graph algorithm.
5. A* is used in practical applications; AO* is rarely used in practical applications.
Q2 Draw a conceptual dependency graph for the following sentence:
A dog is greedily eating a bone.
Q3 Explain forward chaining and backward chaining with an example.
Give regular English statements and their predicate equivalent.
Write explicitly the statement you wanted to prove.
Then give forward chaining and backward chaining.
Forward Chaining:

Forward chaining is also known as forward deduction or forward reasoning when using an inference engine. It is a form of reasoning that starts with the atomic sentences in the knowledge base and applies inference rules (Modus Ponens) in the forward direction to extract more data until a goal is reached.

The forward-chaining algorithm starts from known facts, triggers all rules whose premises are satisfied, and adds their conclusions to the known facts. This process repeats until the problem is solved.

Properties of Forward-Chaining:

i. It is a bottom-up approach, as it moves from the bottom (known facts) to the top (goal).
ii. It is a process of drawing conclusions from known facts or data, starting from the initial state and working toward the goal state.
iii. The forward-chaining approach is also called data-driven, as we reach the goal using the available data.
iv. The forward-chaining approach is commonly used in expert systems (such as CLIPS), business rule systems, and production rule systems.

Consider the following famous example which we will use in both approaches:

Example:

"As per the law, it is a crime for an American to sell weapons to hostile nations.
Country A, an enemy of America, has some missiles, and all the missiles were
sold to it by Robert, who is an American citizen."

Prove that "Robert is criminal."

Facts Conversion into FOL:

i. It is a crime for an American to sell weapons to hostile nations. (Let p, q, and r be variables.)
American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p) ...(1)
ii. Country A has some missiles: ∃p Owns(A, p) ∧ Missile(p). This can be written as two definite clauses by Existential Instantiation, introducing a new constant T1.
Owns(A, T1) ......(2)
Missile(T1) .......(3)
iii. All of the missiles were sold to country A by Robert.
∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A)......(4)
iv. Missiles are weapons.
Missile(p) → Weapon(p).......(5)
v. An enemy of America is known as hostile.
Enemy(p, America) → Hostile(p)........(6)
vi. Country A is an enemy of America.
Enemy(A, America).........(7)
vii. Robert is American.
American(Robert)..........(8)

Forward chaining proof:

Step-1:

In the first step we start with the known facts and choose the sentences that have no implications: American(Robert), Enemy(A, America), Owns(A, T1), and Missile(T1).

Step-2:

In the second step, we look at the facts that can be inferred from the available facts with satisfied premises.

Rule-(1) does not have its premises satisfied, so nothing is added from it in the first iteration.

Facts (2) and (3) are already in the knowledge base.

Rule-(4) is satisfied with the substitution {p/T1}, so Sells(Robert, T1, A) is added; it is inferred from the conjunction of facts (2) and (3).

Rule-(6) is satisfied with the substitution {p/A}, so Hostile(A) is added; it is inferred from fact (7).

Step-3:

At step-3, Rule-(1) is satisfied with the substitution {p/Robert, q/T1, r/A}, so we can add Criminal(Robert), which is inferred from all the available facts. Hence we have reached our goal statement.

Hence it is proved that “Robert is Criminal” using Forward Chaining approach.
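The steps above can be sketched in Python. This is an illustrative toy forward chainer, not a full first-order inference engine: facts are tuples, rules are (premises, conclusion) pairs, and strings beginning with "?" stand for variables.

```python
facts = {
    ("American", "Robert"),
    ("Missile", "T1"),
    ("Owns", "A", "T1"),
    ("Enemy", "A", "America"),
}

# rules are (premises, conclusion); strings starting with "?" are variables
rules = [
    ([("Missile", "?p")], ("Weapon", "?p")),                      # rule (5)
    ([("Enemy", "?p", "America")], ("Hostile", "?p")),            # rule (6)
    ([("Missile", "?p"), ("Owns", "A", "?p")],
     ("Sells", "Robert", "?p", "A")),                             # rule (4)
    ([("American", "?p"), ("Weapon", "?q"),
      ("Sells", "?p", "?q", "?r"), ("Hostile", "?r")],
     ("Criminal", "?p")),                                         # rule (1)
]

def match(premise, fact, subst):
    # try to match one premise against one ground fact, extending subst
    if len(premise) != len(fact) or premise[0] != fact[0]:
        return None
    s = dict(subst)
    for p, f in zip(premise[1:], fact[1:]):
        if p.startswith("?"):
            if s.get(p, f) != f:
                return None
            s[p] = f
        elif p != f:
            return None
    return s

def forward_chain(facts, rules):
    # repeatedly fire every rule whose premises are satisfied,
    # adding conclusions, until no new fact appears
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            substs = [{}]
            for prem in premises:
                substs = [s2 for s in substs for f in facts
                          if (s2 := match(prem, f, s)) is not None]
            for s in substs:
                new = tuple(s.get(t, t) for t in conclusion)
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

derived = forward_chain(facts, rules)
print(("Criminal", "Robert") in derived)  # True
```

Running it derives Weapon(T1), Hostile(A), and Sells(Robert, T1, A) in the first pass, and Criminal(Robert) once all four premises of rule (1) are present, mirroring the proof steps above.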

Backward Chaining:

Backward chaining is also known as backward deduction or backward reasoning when using an inference engine. A backward-chaining algorithm is a form of reasoning that starts with the goal and works backward, chaining through rules to find known facts that support the goal.
Properties of backward chaining:

i. It is known as a top-down approach.
ii. Backward chaining is based on the modus ponens inference rule.
iii. In backward chaining, the goal is broken into sub-goals to prove the facts true.
iv. It is called a goal-driven approach, as the list of goals decides which rules are selected and used.
v. The backward-chaining algorithm is used in game theory, automated theorem-proving tools, inference engines, proof assistants, and various AI applications.
vi. The backward-chaining method mostly uses a depth-first search strategy for proofs.

Example:

"As per the law, it is a crime for an American to sell weapons to hostile nations.
Country A, an enemy of America, has some missiles, and all the missiles were
sold to it by Robert, who is an American citizen."

Prove that "Robert is criminal."

Facts Conversion into FOL:

i. American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p) ...(1)
ii. Owns(A, T1) ........(2)
iii. Missile(T1) ........(3)
iv. ∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ......(4)
v. Missile(p) → Weapon(p) .......(5)
vi. Enemy(p, America) → Hostile(p) ........(6)
vii. Enemy(A, America) .........(7)
viii. American(Robert) ..........(8)

Backward-Chaining proof:

Step-1:

In the first step, we take the goal fact and from it infer other facts, which we will finally prove true. Our goal fact is "Robert is Criminal," so its predicate form is Criminal(Robert).
Step-2:

In the second step, we infer other facts from the goal fact that satisfy the rules. As we can see in Rule-(1), the goal predicate Criminal(Robert) is present with the substitution {p/Robert}, so we add all the conjunctive premises below the first level and replace p with Robert.

Here we can see that American(Robert) is a known fact, so it is proved here.

Step-3:

At step-3, we extract the further subgoal Missile(q), which is inferred from Weapon(q), as it satisfies Rule-(5). Weapon(q) then holds with the substitution of the constant T1 for q.
Step-4:

At step-4, we infer the facts Missile(T1) and Owns(A, T1) from Sells(Robert, T1, r), which satisfies Rule-(4) with the substitution of A in place of r. So these two statements are proved here.

Step-5:

At step-5, we infer the fact Enemy(A, America) from Hostile(A), which satisfies Rule-(6). Hence all the statements are proved true using backward chaining.
Hence it is proved that “Robert is Criminal” using Backward Chaining
approach.
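The goal-driven procedure above can also be sketched as a toy backward chainer (an illustrative sketch, not a production prover): the goal is unified with known facts and with rule conclusions, rule premises become new sub-goals, and rule variables are renamed at each use so bindings do not clash.

```python
import itertools

facts = [
    ("American", "Robert"), ("Missile", "T1"),
    ("Owns", "A", "T1"), ("Enemy", "A", "America"),
]

rules = [
    ([("Missile", "?p")], ("Weapon", "?p")),                      # rule (5)
    ([("Enemy", "?p", "America")], ("Hostile", "?p")),            # rule (6)
    ([("Missile", "?p"), ("Owns", "A", "?p")],
     ("Sells", "Robert", "?p", "A")),                             # rule (4)
    ([("American", "?p"), ("Weapon", "?q"),
      ("Sells", "?p", "?q", "?r"), ("Hostile", "?r")],
     ("Criminal", "?p")),                                         # rule (1)
]

def walk(t, s):
    # follow a chain of variable bindings to its end
    while t.startswith("?") and t in s:
        t = s[t]
    return t

def unify(goal, fact, s):
    # unify two predicates under substitution s; return extended s or None
    if len(goal) != len(fact) or goal[0] != fact[0]:
        return None
    for x, y in zip(goal[1:], fact[1:]):
        x, y = walk(x, s), walk(y, s)
        if x == y:
            continue
        if x.startswith("?"):
            s = {**s, x: y}
        elif y.startswith("?"):
            s = {**s, y: x}
        else:
            return None
    return s

fresh = itertools.count()

def rename(rule):
    # standardize rule variables apart for each use
    premises, conclusion = rule
    n = next(fresh)
    ren = lambda p: tuple(f"{t}#{n}" if t.startswith("?") else t for t in p)
    return [ren(p) for p in premises], ren(conclusion)

def prove(goals, s):
    # yield every substitution that proves all goals (backward chaining)
    if not goals:
        yield s
        return
    first, rest = goals[0], goals[1:]
    for fact in facts:                      # the goal matches a known fact
        s2 = unify(first, fact, s)
        if s2 is not None:
            yield from prove(rest, s2)
    for premises, conclusion in map(rename, rules):   # or a rule conclusion
        s2 = unify(first, conclusion, s)
        if s2 is not None:
            yield from prove(premises + rest, s2)

print(any(prove([("Criminal", "Robert")], {})))  # True
```

Proving Criminal(Robert) unwinds exactly as in the steps above: the goal matches the conclusion of rule (1), and the premises American(Robert), Weapon(q), Sells(Robert, q, r), and Hostile(r) are proved in turn from the facts and the remaining rules.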
Q4 Explain hill climbing search with an example.
 Hill Climbing is a technique to solve certain optimization problems. In
this technique, we start with a sub-optimal solution and the solution is
improved repeatedly until some condition is maximized.
 The idea of starting with a sub-optimal solution is compared to starting
from the base of the hill, improving the solution is compared to walking
up the hill, and finally maximizing some condition is compared to
reaching the top of the hill.
 Hence, the hill climbing technique can be considered as the following
phases −
 Constructing a sub-optimal solution obeying the constraints of the
problem
 Improving the solution step-by-step
 Improving the solution until no more improvement is possible
Hill Climbing technique is mainly used for solving computationally hard
problems. It looks only at the current state and immediate future state. Hence,
this technique is memory efficient as it does not maintain a search tree.
Algorithm: Hill Climbing
1. Evaluate the initial state.
2. Loop until a solution is found or there are no new operators left to apply:
- Select and apply a new operator.
- Evaluate the new state:
if it is a goal state → quit;
if it is better than the current state → make it the new current state.
Applications of Hill Climbing Technique
Hill Climbing technique can be used to solve many problems, where the current
state allows for an accurate evaluation function, such as Network-Flow,
Travelling Salesman problem, 8-Queens problem, Integrated Circuit design, etc.
Hill Climbing is used in inductive learning methods too. This technique is used
in robotics for coordination among multiple robots in a team. There are many
other problems where this technique is used.
Example
This technique can be applied to the travelling salesman problem. First, an initial solution that visits all the cities exactly once is determined. This initial solution is usually not optimal, and it can even be very poor. The hill climbing algorithm starts with such an initial solution and improves it in an iterative way. Eventually, a much shorter route is likely to be obtained.
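The iterative improvement described above can be sketched for TSP with a segment-reversal (2-opt style) neighbour move; the distance matrix below is invented for illustration.

```python
import random

# symmetric distance matrix for 5 hypothetical cities (values invented)
dist = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]

def tour_length(tour):
    # total length of the closed tour, returning to the start city
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def hill_climb(tour):
    # apply any improving segment-reversal move until none exists:
    # the result is a local optimum, not necessarily the global one
    improved = True
    while improved:
        improved = False
        for i in range(len(tour) - 1):
            for j in range(i + 1, len(tour)):
                neighbour = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(neighbour) < tour_length(tour):
                    tour, improved = neighbour, True
    return tour

random.seed(0)
start = list(range(5))
random.shuffle(start)            # a random (likely poor) initial tour
best = hill_climb(start)
print(best, tour_length(best))
```

Note the caveat inherent to hill climbing: the loop stops at the first tour with no better neighbour, which may be a local rather than global optimum.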
Q5 Compare and contrast BFS and DFS.
BFS vs DFS:
i. BFS stands for "Breadth First Search"; DFS stands for "Depth First Search".
ii. BFS starts the traversal from the root node and explores the search level by level, i.e., as close as possible to the root node; DFS starts the traversal from the root node and explores as far as possible from the root node, i.e., depth-wise.
iii. BFS can be done with the help of a queue, i.e., a FIFO implementation; DFS can be done with the help of a stack, i.e., a LIFO implementation.
iv. BFS works in a single stage: the visited vertices are removed from the queue and then displayed at once. DFS works in two stages: in the first stage the visited vertices are pushed onto the stack, and later, when there is no further vertex to visit, they are popped off.
v. BFS is slower than DFS; DFS is faster than BFS.
vi. BFS requires more memory compared to DFS; DFS requires less memory compared to BFS.
vii. Applications of BFS: finding the shortest path, single-source and all-pairs shortest paths, spanning trees, and connectivity testing. Applications of DFS: cycle detection, connectivity testing, finding a path between two vertices V and W in a graph, and finding spanning trees and forests.
viii. BFS is useful in finding the shortest path: it can find the shortest distance between some starting node and the remaining nodes of the graph. DFS is not so useful for finding shortest paths: it performs a traversal of a general graph, and the idea of DFS is to make a path as long as possible and then go back (backtrack) to add further branches, also as long as possible.
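The queue-versus-stack contrast in points iii and iv can be made concrete with a short sketch on a made-up four-node graph:

```python
from collections import deque

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}  # toy graph

def bfs(start):
    # level-by-level traversal using a FIFO queue
    order, visited, queue = [], {start}, deque([start])
    while queue:
        node = queue.popleft()              # FIFO: oldest first
        order.append(node)
        for nxt in graph[node]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(nxt)
    return order

def dfs(start):
    # depth-wise traversal using a LIFO stack
    order, visited, stack = [], set(), [start]
    while stack:
        node = stack.pop()                  # LIFO: newest first
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        for nxt in reversed(graph[node]):   # keep left-to-right order
            stack.append(nxt)
    return order

print(bfs("A"))  # ['A', 'B', 'C', 'D']
print(dfs("A"))  # ['A', 'B', 'D', 'C']
```

On this graph BFS visits both of A's neighbours before descending, while DFS follows the branch through B all the way to D before returning to C.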
Q6 Explain iterative deepening search with an example.
Iterative Deepening Search:
Iterative deepening depth-first search (IDDFS) is an important uninformed search strategy, like BFS and DFS. IDDFS can be defined as an amalgam of the BFS and DFS searching techniques: each of BFS and DFS has certain limitations, so the two procedures are hybridized to eliminate the demerits of each individually. We do a depth-first search up to a fixed depth limit, then keep incrementing the depth limit and iterating the procedure until we have found the goal node or have traversed the whole tree, whichever comes first.
Working Algorithm of IDDFS
In the uninformed searching strategy, neither BFS nor DFS is ideal in both time and space when searching for an element. Iterative deepening combines the space competence of DFS with the optimal-solution approach of BFS. The main idea lies in re-computing the entities of the boundary instead of storing them: every re-computation is a depth-first search, and thus uses less space.

Consider making a breadth-first search into an iterative deepening search. We can do this by setting aside a DFS that searches up to a limit. It first searches to a pre-defined depth limit, creating routes of length 1 in the DFS way; next it generates routes of depth limit 2, 3, and onwards. It can even delete all the preceding calculations at the beginning of each loop iteration. Eventually, at some depth, the solution will be found if there is one in the tree, because the enumeration takes place in order of depth.
In order to implement iterative deepening search, we have to distinguish between two kinds of failure:
- a failure where the depth bound was attained, and
- a failure where the depth bound was not attained.
In the first case, we try the search again with an increased depth limit; this failure is said to fail unnaturally. In the second case, searching again is pointless, since no solution exists and repeating the search multiple times would simply waste time; this failure is said to fail naturally.
Example:

In the given tree, the starting node is A and the depth is initialized to 0. The goal node is R, for which we have to find the depth and the path to reach it; here that depth is 4. In this example we consider the tree to be finite, but the same procedure works for an infinite tree as well. In the IDDFS algorithm we first do a DFS to a specified depth and then increase the depth at each loop; this step forms the DLS, or Depth Limited Search. The resulting traversal is the IDDFS search.

Iterative deepening depth-first search is a hybrid algorithm emerging from BFS and DFS. IDDFS might not be used directly in many applications of computer science, yet the strategy is used in searching data of infinite spaces by incrementing the depth limit progressively and iteratively. This is quite useful and has applications in AI and the emerging data-science industry.
Advantages
 IDDFS gives us the hope of finding the solution if it exists in the tree.
 The space complexity is O(d), where d is the depth of the goal; the time complexity is O(b^d), where b is the branching factor.
Disadvantages
 The time taken to reach the goal node is exponential.
 The main problem with IDDFS is the wasted, repeated calculations that take place at each depth.
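The procedure can be sketched as a depth-limited search (DLS) wrapped in a loop that raises the limit; the tree below is invented for illustration, with the goal G at depth 3.

```python
def dls(graph, node, goal, limit, path):
    # depth-limited DFS: explore at most `limit` edges below `node`
    path.append(node)
    if node == goal:
        return True
    if limit > 0:
        for child in graph.get(node, []):
            if dls(graph, child, goal, limit - 1, path):
                return True
    path.pop()                      # dead end: backtrack
    return False

def iddfs(graph, start, goal, max_depth=10):
    # re-run DLS with limits 0, 1, 2, ... until the goal is found
    for depth in range(max_depth + 1):
        path = []
        if dls(graph, start, goal, depth, path):
            return depth, path
    return None                     # not found within max_depth

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "E": ["G"]}
print(iddfs(tree, "A", "G"))  # (3, ['A', 'B', 'E', 'G'])
```

The repeated work at shallow depths (the disadvantage noted above) is visible here: the nodes near A are re-expanded on every iteration, but because each iteration is a plain DFS, only the current path is ever stored.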

Assignment 2:
Q7 Describe the components of an expert system using medical diagnosis
system as an example.
 An expert system is made up of three parts:
o A user interface - the system that allows a non-expert user to query (question) the expert system and to receive advice. The user interface is designed to be as simple to use as possible.
o A knowledge base - a collection of facts and rules. The knowledge base is created from information provided by human experts.
o An inference engine - this acts rather like a search engine, examining the knowledge base for information that matches the user's query.
 The non-expert user queries the expert system. This is done by asking a
question, or by answering questions asked by the expert system.
 The inference engine uses the query to search the knowledge base and
then provides an answer or some advice to the user.
 In a medical diagnosis system, the knowledge base would contain medical information, the symptoms of the patient would be used as the query, and the advice would be a diagnosis of the patient's illness.
Q8 Explain the alpha-beta pruning algorithm. Why is it suitable for 2-player games?
Alpha-Beta Pruning Algorithm:
 Alpha-beta pruning is a modified version of the minimax algorithm. It is
an optimization technique for the minimax algorithm.
 As we have seen in the minimax search algorithm, the number of game states it has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can effectively cut it in half: there is a technique by which, without checking each node of the game tree, we can compute the correct minimax decision, and this technique is called pruning. It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning. It is also called the Alpha-Beta algorithm.
 Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only the tree's leaves but also entire sub-trees.
 The two-parameter can be defined as:
1. Alpha: the best (highest-value) choice we have found so far at any point along the path of the Maximizer. The initial value of alpha is -∞.
2. Beta: the best (lowest-value) choice we have found so far at any point along the path of the Minimizer. The initial value of beta is +∞.
 Alpha-beta pruning applied to a standard minimax algorithm returns the same move as the standard algorithm does, but it removes all the nodes that do not really affect the final decision and only make the algorithm slow. Pruning these nodes makes the algorithm fast.

Advantages for two-player games
It is an adversarial search algorithm used commonly for machine playing of
two-player games (Tic-tac-toe, Chess, Go, etc.). It stops evaluating a move
when at least one possibility has been found that proves the move to be worse
than a previously examined move. Such moves need not be evaluated further.
When applied to a standard minimax tree, it returns the same move as minimax
would, but prunes away branches that cannot possibly influence the final
decision.
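A minimal sketch of minimax with alpha-beta pruning on a tiny invented game tree. Here the MIN node R is cut off after its first child (value 2 is already below the Maximizer's alpha of 3), so leaf r2 is never evaluated, yet the root value matches plain minimax:

```python
import math

def alphabeta(node, alpha, beta, maximizing, tree, leaves):
    if node in leaves:                      # leaf: static evaluation value
        return leaves[node]
    if maximizing:
        value = -math.inf
        for child in tree[node]:
            value = max(value, alphabeta(child, alpha, beta, False,
                                         tree, leaves))
            alpha = max(alpha, value)
            if alpha >= beta:               # beta cut-off: MIN avoids this branch
                break
        return value
    value = math.inf
    for child in tree[node]:
        value = min(value, alphabeta(child, alpha, beta, True,
                                     tree, leaves))
        beta = min(beta, value)
        if alpha >= beta:                   # alpha cut-off: MAX already has better
            break
    return value

# MAX moves at the root; leaf values are invented for illustration
tree = {"root": ["L", "R"], "L": ["l1", "l2"], "R": ["r1", "r2"]}
leaves = {"l1": 3, "l2": 5, "r1": 2, "r2": 9}
print(alphabeta("root", -math.inf, math.inf, True, tree, leaves))  # 3
```

The alternation of the `maximizing` flag is exactly why the method fits two-player zero-sum games: the two players' best-so-far bounds (alpha and beta) are what justify skipping branches neither player would allow.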
Q9 Elaborate on the difference between informed and uninformed search.

INFORMED SEARCH vs UNINFORMED SEARCH:
i. Informed search uses knowledge in the searching process; uninformed search does not use knowledge in the searching process.
ii. Informed search finds a solution more quickly; uninformed search finds a solution slowly by comparison.
iii. Informed search is highly efficient; uninformed search is comparatively less efficient.
iv. The cost of informed search is low; the cost of uninformed search is high.
v. Informed search consumes less time; uninformed search consumes more time.
vi. Informed search provides direction regarding the solution; uninformed search gives no suggestion regarding the solution.
vii. Informed search is less lengthy to implement; uninformed search is lengthier to implement.
viii. Examples of informed search: Greedy Search, A* Search, Graph Search. Examples of uninformed search: Depth First Search, Breadth First Search.
Q10 What are the different types of agents? Explain any three properly.

Types of Agents

Agents can be grouped into five classes based on their degree of perceived
intelligence and capability:
 Simple Reflex Agents
 Model-Based Reflex Agents
 Goal-Based Agents
 Utility-Based Agents
 Learning Agent

Simple reflex agents

Simple reflex agents ignore the rest of the percept history and act only on the
basis of the current percept. (Percept history is the history of all that an agent
has perceived to date.) The agent function is based on condition-action rules:
a condition-action rule maps a state, i.e. a condition, to an action. If the
condition is true, the action is taken; otherwise it is not. This agent function
only succeeds when the environment is fully observable. For simple reflex
agents operating in partially observable environments, infinite loops are often
unavoidable; it may be possible to escape them if the agent can randomize its
actions. Problems with simple reflex agents are:
 Very limited intelligence.
 No knowledge of non-perceptual parts of state.
 Usually too big to generate and store.
 If there occurs any change in the environment, then the collection of
rules need to be updated.

Model-based reflex agents

It works by finding a rule whose condition matches the current situation. A
model-based agent can handle partially observable environments by using a
model of the world. The agent has to keep track of an internal state, adjusted
by each percept, that depends on the percept history. The current state is stored
inside the agent as some kind of structure describing the part of the world that
cannot be seen. Updating the state requires information about:
 how the world evolves independently of the agent, and
 how the agent's actions affect the world.
Goal-based agents

These kinds of agents take decisions based on how far they currently are from
their goal (a description of desirable situations). Their every action is intended
to reduce the distance from the goal. This gives the agent a way to choose
among multiple possibilities, selecting the one that reaches a goal state. The
knowledge that supports its decisions is represented explicitly and can be
modified, which makes these agents more flexible. They usually require search
and planning, and their behaviour can easily be changed.

Q11 Explain supervised, unsupervised and reinforcement learning with suitable examples.
Supervised learning
Supervised learning, as the name indicates, involves the presence of a
supervisor acting as a teacher. Basically, supervised learning is learning in
which we teach or train the machine using data that is well labelled, which
means the data is already tagged with the correct answer. After that, the
machine is provided with a new set of examples (data), so that the supervised
learning algorithm, having analysed the training data (the set of training
examples), produces a correct outcome from the labelled data.
Supervised learning is classified into two categories of algorithms:
Classification: a classification problem is when the output variable is a
category, such as "red" or "blue", or "disease" and "no disease".
Regression: a regression problem is when the output variable is a real value,
such as "dollars" or "weight".
Advantages:
 Supervised learning allows collecting data and producing data output from
previous experience.
 It helps to optimize performance criteria with the help of experience.
 Supervised machine learning helps to solve various types of real-world
computation problems.
Disadvantages:
 Classifying big data can be challenging.
 Training for supervised learning needs a lot of computation time.
Example:
Is it a cat or a dog?
Image classification is a popular problem in the computer vision field. Here, the
goal is to predict what class an image belongs to. In this set of problems, we are
interested in finding the class label of an image. More precisely: is the image of
a car or a plane? A cat or a dog?
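The cat-or-dog task can be illustrated with the simplest supervised learner, a 1-nearest-neighbour classifier. The labelled feature vectors (weight, ear length) are invented for illustration; a real image classifier would learn from pixel-derived features.

```python
# Toy labelled dataset: (weight_kg, ear_length_cm) -> label.
# The feature values are invented purely for illustration.
train = [((4.0, 7.5), "cat"), ((30.0, 10.0), "dog"),
         ((3.5, 8.0), "cat"), ((25.0, 12.0), "dog")]

def predict(x):
    # 1-nearest-neighbour: return the label of the closest training example
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(train, key=lambda ex: d2(ex[0], x))[1]

print(predict((5.0, 7.0)))    # a small, short-eared animal -> "cat"
print(predict((28.0, 11.0)))  # a large animal -> "dog"
```

The "supervisor" here is the list of labelled pairs: new inputs are classified by comparing them to examples whose correct answers are already known.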
Unsupervised Learning
Unsupervised learning is the training of a machine using information that is
neither classified nor labelled, allowing the algorithm to act on that
information without guidance. Here the task of the machine is to group
unsorted information according to similarities, patterns and differences,
without any prior training on the data.
Unlike supervised learning, no teacher is provided, which means no training is
given to the machine. The machine is therefore left to find the hidden structure
in the unlabelled data by itself.
Example:
For instance, suppose the machine is given an image containing both dogs and
cats that it has never seen before. The machine has no idea of the features of
dogs and cats, so it cannot categorize the image into 'dogs' and 'cats'. But it
can categorize the animals according to their similarities, patterns and
differences: it can easily split the picture into two parts, the first containing all
the pictures of dogs and the second containing all the pictures of cats, without
having learned anything beforehand, i.e. with no training data or examples.
Unsupervised learning allows the model to work on its own to discover
patterns and information that were previously undetected. It mainly deals with
unlabelled data.
Unsupervised learning is classified into two categories of algorithms:
Clustering: A clustering problem is where you want to discover the inherent
groupings in the data, such as grouping customers by purchasing behavior.
Association: An association rule learning problem is where you want to
discover rules that describe large portions of your data, such as people that buy
X also tend to buy Y.
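The clustering category above can be illustrated with a minimal k-means sketch in pure Python. The 2-D points are invented; the algorithm alternates assigning each point to its nearest centroid with recomputing centroid means, with no labels involved.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    # toy k-means: alternate assignment and centroid update
    rng = random.Random(seed)
    centroids = rng.sample(points, k)               # random initial centroids
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                            # assign to nearest centroid
            i = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        for c, pts in enumerate(clusters):          # recompute centroid means
            if pts:
                centroids[c] = (sum(x for x, _ in pts) / len(pts),
                                sum(y for _, y in pts) / len(pts))
    return centroids, clusters

points = [(1, 1), (1.5, 2), (1, 1.5), (8, 8), (9, 9), (8, 9)]
centroids, clusters = kmeans(points, 2)
print(clusters)
```

On this data the two well-separated groups end up in different clusters without any point ever being told which group it belongs to, which is exactly the unsupervised setting described above.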
Reinforcement learning
Reinforcement learning is an area of machine learning. It is about taking
suitable actions to maximize reward in a particular situation. It is employed by
various software and machines to find the best possible behaviour or path to
take in a specific situation. Reinforcement learning differs from supervised
learning: in supervised learning the training data carries the answer key, so the
model is trained with the correct answer itself, whereas in reinforcement
learning there is no answer key, and the reinforcement agent decides what to
do to perform the given task. In the absence of a training dataset, it is bound to
learn from its experience.
Examples:
Medicine
Reinforcement learning is ideally suited to figuring out optimal treatments for
health conditions and drug therapies. It has also been used in clinical trials as
well as for other applications in healthcare.
Robotics
Reinforcement learning gives robotics a “framework and a set of tools” for
hard-to-engineer behaviors. Since reinforcement learning can happen without
supervision, this could help robotics grow exponentially.
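The trial-and-error loop can be sketched with tabular Q-learning on a hypothetical 1-D corridor environment (states 0-4, reward only at the goal state 4): the agent receives no labelled answers, only rewards, and still learns which action to take.

```python
import random

# Hypothetical toy environment: a 1-D corridor of states 0..4, goal at 4.
# Actions: 0 = step left, 1 = step right; reward 1 only on reaching the goal.
def step(state, action):
    nxt = max(0, min(4, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == 4 else 0.0), nxt == 4

q = [[0.0, 0.0] for _ in range(5)]      # Q-table: q[state][action]
alpha, gamma, eps = 0.5, 0.9, 0.2       # learning rate, discount, exploration
rng = random.Random(0)

for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        a = (rng.randrange(2) if rng.random() < eps
             else max((0, 1), key=lambda x: q[s][x]))
        s2, r, done = step(s, a)
        # Q-learning update: nudge q[s][a] toward reward plus discounted future value
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2

# after training, the greedy policy should move right in every non-goal state
print([max((0, 1), key=lambda a: q[s][a]) for s in range(4)])
```

Note how this matches the definition above: there is no answer key telling the agent "go right"; the preference for moving right emerges purely from the rewards observed during its own experience.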
Q12 What is natural language processing? Explain any two applications of
NLP.
 Natural Language Processing (NLP) refers to the AI method of
communicating with intelligent systems using a natural language such as
English.
 Processing of natural language is required when you want an intelligent
system, like a robot, to perform as per your instructions, or when you want to
hear a decision from a dialogue-based clinical expert system, etc.
 The field of NLP involves making computers perform useful tasks with the
natural languages humans use. The input and output of an NLP system can
be −
 Speech
 Written Text
Components of NLP
There are two components of NLP as given −
Natural Language Understanding (NLU)
Understanding involves the following tasks −
 Mapping the given input in natural language into useful
representations.
 Analyzing different aspects of the language.
Natural Language Generation (NLG)
It is the process of producing meaningful phrases and sentences in the
form of natural language from some internal representation.
It involves −
 Text planning − retrieving the relevant content from the knowledge base.
 Sentence planning − choosing the required words, forming meaningful
phrases, and setting the tone of the sentence.
 Text realization − mapping the sentence plan into sentence structure.
NLU is harder than NLG.
Applications of NLP
1. Sentiment Analysis
An important application of natural language processing (NLP) is
sentiment analysis. As the name suggests, sentiment analysis is used to
identify the sentiment among several posts, including cases where the
emotion is not expressed explicitly. Companies use sentiment analysis
to identify the opinion and sentiment of their customers online; it helps
them understand what their customers think about their products and
services, and to judge their overall reputation from customer posts. In
this way, beyond determining simple polarity, sentiment analysis
understands sentiment in context, to help us better understand what is
behind an expressed opinion.
2. Automatic Summarization
In this digital era, the most valuable thing is data, or, you could say,
information. However, do we really get the useful and required amount
of information? The answer is no: information is overloaded, and our
access to knowledge and information far exceeds our capacity to
understand it. We are in serious need of automatic text summarization,
because the flood of information over the internet is not going to stop.
Text summarization may be defined as the technique of creating a
short, accurate summary of longer text documents. Automatic text
summarization helps us get relevant information in less time. Natural
language processing (NLP) plays an important role in developing
automatic text summarization.
Q13 What are the characteristics of an expert system? What is the architecture of an expert system?
Following are the important Characteristics of Expert System in AI:
 The Highest Level of Expertise: The Expert system in AI offers the
highest level of expertise. It provides efficiency, accuracy and
imaginative problem-solving.
 Right on Time Reaction: An Expert System in Artificial Intelligence
interacts in a very reasonable period of time with the user. The total time
must be less than the time taken by an expert to get the most accurate
solution for the same problem.
 Good Reliability: The expert system in AI needs to be reliable, and it
must not make mistakes.
 Flexible: It is vital that an expert system remains flexible in handling
the knowledge it possesses.
 Effective Mechanism: Expert System in Artificial Intelligence must have
an efficient mechanism to administer the compilation of the existing
knowledge in it.
 Capable of handling challenging decision & problems: An expert
system is capable of handling challenging decision problems and
delivering solutions.
Architecture of Expert System:
 The Client Interface processes requests for service from system-users and
from application layer components.
 The Knowledge-Base Editor is a simple editor that enables a subject
matter expert to compose and add rules to the knowledge base.
 The Rule Translator converts rules from one form to another, i.e. from
their original form to a machine-readable form.
 The Rule Engine (inference engine) is responsible for executing
knowledge-base rules.
 The shell component, Rule Object Classes, is a container for the
supporting object classes.
Q14 Explain Ontology with an example.
In AI, an ontology is a specification of the meanings of the symbols in an
information system. That is, it is a specification of a conceptualization. It is a
specification of what individuals and relationships are assumed to exist and
what terminology is used for them. Typically, it specifies what types of
individuals will be modeled, specifies what properties will be used, and gives
some axioms that restrict the use of that vocabulary.
Example: Consider a trading agent that is designed to find accommodations.
Users could use such an agent to describe what accommodation they want. The
trading agent could search multiple knowledge bases to find suitable
accommodations or to notify users when some appropriate accommodation
becomes available. An ontology is required to specify the meaning of the
symbols for the user and to allow the knowledge bases to interoperate. It
provides the semantic glue to tie together the users' needs with the knowledge
bases.
In such a domain, houses and apartment buildings may both be residential
buildings. Although it may be sensible to suggest renting a house or an
apartment in an apartment building, it may not be sensible to suggest renting an
apartment building to someone who does not actually specify that they want to
rent the whole building. A "living unit" could be defined to be the collection of
rooms that some people, who are living together, live in. A living unit may be
what a rental agency offers to rent. At some stage, the designer may have to
decide whether a room for rent in a house is a living unit, or even whether part
of a shared room that is rented separately is a living unit. Often the boundary
cases - cases that may not be initially anticipated - are not clearly delineated but
become better defined as the ontology evolves.
The ontology would not contain descriptions of actual houses or apartments
because the actual available accommodation would change over time and would
not change the meaning of the vocabulary.
