Btech-E Div Assignment1
Assignment 1
Roll No: E008 Name: Satnam Singh Pahwa
Class: B. Tech CE Batch: E1
Date of Experiment: 13/08/2020 Date of Submission
Grade
A*
1. A* is a computer algorithm used in pathfinding and graph traversal. It plots an efficiently directed path between a number of points called nodes.
2. In the A* algorithm you traverse the tree in depth, and at each node you add the estimated cost of reaching the goal state from the current state to the cost of reaching the current state.
3. It is used to find just one path.
4. It is often known as the OR graph algorithm.
5. It is used in practical applications.
AO*
1. In the AO* algorithm you follow a similar procedure, but there are constraints on traversing specific paths.
2. When you traverse those paths, the cost of all the paths originating from the preceding node is added up to the level where you find the goal state, regardless of whether or not they take you to the goal state.
3. It is used to find more than one path.
4. It is often known as the AND-OR graph algorithm.
5. It is rarely used in practical applications.
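The cost rule in point 2 for A*, f(n) = g(n) + h(n), can be sketched in Python (a minimal illustration; the example graph, heuristic values, and node names are invented for this sketch):

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search on a weighted graph.
    graph: dict mapping node -> list of (neighbor, edge_cost)
    h: dict mapping node -> heuristic estimate of cost to the goal
    Returns (path, total_cost), or (None, inf) if the goal is unreachable."""
    # Priority queue ordered by f(n) = g(n) + h(n)
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):  # found a cheaper route to nbr
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

# Invented example graph and an admissible heuristic
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)], "C": [("D", 1)]}
h = {"A": 3, "B": 2, "C": 1, "D": 0}
print(a_star(graph, h, "A", "D"))  # -> (['A', 'B', 'C', 'D'], 4)
```

The direct edge A-C (cost 4) and the edge B-D (cost 5) are passed over because the expanding frontier always prefers the node with the lowest f(n).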
Q2 Draw a conceptual dependency graph for the following sentence:
A dog is greedily eating a bone.
Q3 Explain forward chaining and backward chaining with an example.
Give regular English statements and their predicate equivalent.
Write explicitly the statement you wanted to prove.
Then give forward chaining and backward chaining.
Forward Chaining:
The forward-chaining algorithm starts from known facts, triggers all rules whose premises are satisfied, and adds their conclusions to the known facts. This process repeats until the problem is solved.
Consider the following famous example which we will use in both approaches:
Example:
"As per the law, it is a crime for an American to sell weapons to hostile nations.
Country A, an enemy of America, has some missiles, and all the missiles were
sold to it by Robert, who is an American citizen."
Step-1:
In the first step we will start with the known facts and choose the sentences which do not have implications, such as: American(Robert), Enemy(A, America), Owns(A, T1), and Missile(T1). All these facts form the initial fact base.
Step-2:
At the second step, we will look at the facts that can be inferred from the available facts, i.e. rules whose premises are satisfied.
The premises of Rule-(1) are not yet satisfied, so it will not be applied in the first iteration.
Step-3:
At step-3, the premises of Rule-(1) are now satisfied with the substitution {Robert/p, T1/q, A/r}, so we can add Criminal(Robert) to the fact base, which proves that Robert is a criminal.
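The forward-chaining loop can be sketched in Python (a minimal illustration with the substitutions pre-applied rather than full unification; the rule numbers for Rule-1, Rule-4, and Rule-5 follow the text, while the hostility rule is assumed from the standard version of this example):

```python
# Known facts with no implications (Step-1)
facts = {"American(Robert)", "Enemy(A, America)", "Owns(A, T1)", "Missile(T1)"}

# Each rule: (set of premises, conclusion), with constants pre-substituted
rules = [
    ({"Missile(T1)", "Owns(A, T1)"}, "Sells(Robert, T1, A)"),    # Rule-4
    ({"Missile(T1)"}, "Weapon(T1)"),                              # Rule-5
    ({"Enemy(A, America)"}, "Hostile(A)"),                        # assumed hostility rule
    ({"American(Robert)", "Weapon(T1)",
      "Sells(Robert, T1, A)", "Hostile(A)"}, "Criminal(Robert)"), # Rule-1
]

# Repeatedly fire every rule whose premises are all known (Steps 2-3),
# until a full pass adds no new fact.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("Criminal(Robert)" in facts)  # -> True
```

Each pass grows the fact base: first Sells, Weapon, and Hostile are added, and then Rule-1's premises are all satisfied, yielding Criminal(Robert).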
Backward Chaining:
The backward-chaining algorithm starts from the goal and works backwards: it finds rules whose conclusion matches the current goal and tries to prove each of their premises in turn, until it bottoms out in known facts.
Example:
"As per the law, it is a crime for an American to sell weapons to hostile nations.
Country A, an enemy of America, has some missiles, and all the missiles were
sold to it by Robert, who is an American citizen."
Backward-Chaining proof:
Step-1:
At the first step, we will take the goal fact. From the goal fact we will infer other facts, and at last we will prove those facts true. Our goal fact is "Robert is a criminal", so the corresponding predicate is Criminal(Robert).
Step-2:
At the second step, we will infer other facts from the goal fact which satisfy the rules. As we can see in Rule-(1), the goal predicate Criminal(Robert) is present with substitution {Robert/P}. So we will add all the conjunctive facts below the first level and replace P with Robert.
Step-3:
At step-3, we will extract the further fact Missile(q), from which Weapon(q) can be inferred, as it satisfies Rule-(5). Weapon(q) is thus also true with the substitution of the constant T1 at q.
Step-4:
At step-4, we can infer the facts Missile(T1) and Owns(A, T1) from Sells(Robert, T1, r), which satisfies Rule-(4), with the substitution of A in place of r. So these two statements are proved here.
Step-5:
At step-5, we can infer the fact Enemy(A, America) from Hostile(A), which satisfies the hostility rule (Enemy(x, America) → Hostile(x)). Since American(Robert) is already a known fact, all the leaf facts are now proved true, and hence the goal Criminal(Robert) holds.
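The backward-chaining proof can be sketched as a short recursion in Python (a minimal illustration with the substitutions pre-applied rather than full unification; the hostility rule is assumed from the standard version of this example):

```python
facts = {"American(Robert)", "Enemy(A, America)", "Owns(A, T1)", "Missile(T1)"}

rules = [
    (["Missile(T1)", "Owns(A, T1)"], "Sells(Robert, T1, A)"),    # Rule-4
    (["Missile(T1)"], "Weapon(T1)"),                              # Rule-5
    (["Enemy(A, America)"], "Hostile(A)"),                        # assumed hostility rule
    (["American(Robert)", "Weapon(T1)",
      "Sells(Robert, T1, A)", "Hostile(A)"], "Criminal(Robert)"), # Rule-1
]

def prove(goal):
    """Work backwards: a goal holds if it is a known fact, or if some rule
    concludes it and every premise of that rule can itself be proved."""
    if goal in facts:
        return True
    return any(conclusion == goal and all(prove(p) for p in premises)
               for premises, conclusion in rules)

print(prove("Criminal(Robert)"))  # -> True
```

This naive recursion assumes the rule set is acyclic; a real backward chainer also needs unification and loop detection.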
IDDFS (Iterative Deepening Depth-First Search):
Here in the given tree, the starting node is A and the depth is initialized to 0. The goal node is R, and we have to find the depth and the path to reach it. The depth from the figure is 4. In this example we consider the tree to be finite, but the same procedure works for an infinite tree as well. We know that in the IDDFS algorithm we first do DFS up to a specified depth and then increase the depth limit on each iteration. This depth-bounded step is DLS, or Depth-Limited Search. Thus, the following traversal shows the IDDFS search.
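The DLS-inside-a-loop idea can be sketched in Python (a minimal illustration; the tree below is invented for this sketch and is not the figure's tree, where the goal R sits at depth 4):

```python
def dls(tree, node, goal, limit):
    """Depth-limited DFS: return the path to goal within `limit` edges, else None."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in tree.get(node, []):
        path = dls(tree, child, goal, limit - 1)
        if path is not None:
            return [node] + path
    return None

def iddfs(tree, start, goal, max_depth):
    """Run DLS with increasing depth limits 0, 1, 2, ... up to max_depth."""
    for depth in range(max_depth + 1):
        path = dls(tree, start, goal, depth)
        if path is not None:
            return depth, path
    return None

# Invented example tree: R is found at depth 3.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "E": ["R"]}
print(iddfs(tree, "A", "R", 10))  # -> (3, ['A', 'B', 'E', 'R'])
```

Shallow nodes are re-expanded on every iteration, but since most nodes of a tree live at the deepest level, the total work stays within a constant factor of a single DFS to the goal depth.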
Assignment 2:
Q7 Describe the components of an expert system using medical diagnosis
system as an example.
An expert system is made up of three parts:
o A user interface - This is the system that allows a non-expert user
to query (question) the expert system and to receive advice. The
user interface is designed to be as simple to use as possible.
o A knowledge base - This is a collection of facts and rules. The
knowledge base is created from information provided by human
experts.
o An inference engine - This acts rather like a search engine,
examining the knowledge base for information that matches the
user's query.
The non-expert user queries the expert system. This is done by asking a
question, or by answering questions asked by the expert system.
The inference engine uses the query to search the knowledge base and
then provides an answer or some advice to the user.
In a medical diagnosis system, the knowledge base would contain medical
information, the symptoms of the patient would be used as the query, and
the advice would be a diagnosis of the patient's illness.
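A toy version of these three components can be sketched in Python (the symptoms, rules, and diagnoses below are entirely invented for illustration; a real medical system would need far richer knowledge representation):

```python
# Knowledge base: (required symptom set, diagnosis) rules from human experts.
knowledge_base = [
    ({"fever", "cough", "sore throat"}, "flu"),
    ({"sneezing", "runny nose"}, "common cold"),
    ({"fever", "rash"}, "measles"),
]

def inference_engine(symptoms):
    """Search the knowledge base for every rule whose symptoms all match."""
    return [diagnosis for required, diagnosis in knowledge_base
            if required <= symptoms]

# User-interface stand-in: the query is just the set of reported symptoms.
print(inference_engine({"fever", "cough", "sore throat", "rash"}))
# -> ['flu', 'measles']
```

The three parts map directly onto the components above: the list of rules is the knowledge base, the matching function is the inference engine, and the symptom set plays the role of the user's query.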
Q8 Explain the alpha-beta pruning algorithm. Why is it suitable for a two-player game?
Alpha-Beta Pruning Algorithm:
Alpha-beta pruning is a modified version of the minimax algorithm. It is
an optimization technique for the minimax algorithm.
As we have seen, the number of game states the minimax search algorithm
has to examine is exponential in the depth of the tree. We cannot
eliminate the exponent, but we can effectively cut it in half. Hence there is
a technique by which we can compute the correct minimax decision
without checking every node of the game tree; this technique is called
pruning. It involves two threshold parameters, alpha and beta, for future
expansion, so it is called alpha-beta pruning, also known as the Alpha-Beta
Algorithm.
Alpha-beta pruning can be applied at any depth of a tree, and sometimes
it prunes not only the tree leaves but entire sub-trees.
The two parameters can be defined as:
1.Alpha: The best (highest-value) choice we have found so far at any point
along the path of Maximizer. The initial value of alpha is -∞.
2.Beta: The best (lowest-value) choice we have found so far at any point
along the path of Minimizer. The initial value of beta is +∞.
Alpha-beta pruning applied to a standard minimax algorithm returns the
same move as the standard algorithm, but it removes all the nodes which
do not really affect the final decision yet make the algorithm slow. Hence,
by pruning these nodes, it makes the algorithm faster. It is well suited to
two-player games because the two bounds directly model the two adversaries:
alpha tracks the best value guaranteed so far for the maximizing player and
beta the best for the minimizing player, so any branch that neither player
would allow can be discarded without being explored.
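The cut-off logic can be sketched in Python (a minimal illustration; the two-ply game tree and leaf utilities are invented for this sketch):

```python
import math

def alphabeta(node, alpha, beta, maximizing, tree, values):
    """Minimax with alpha-beta cut-offs; leaves carry utility values."""
    if node not in tree:                        # leaf node: return its utility
        return values[node]
    if maximizing:
        best = -math.inf
        for child in tree[node]:
            best = max(best, alphabeta(child, alpha, beta, False, tree, values))
            alpha = max(alpha, best)
            if beta <= alpha:                   # Minimizer would never allow this branch,
                break                           # so prune the remaining children
        return best
    best = math.inf
    for child in tree[node]:
        best = min(best, alphabeta(child, alpha, beta, True, tree, values))
        beta = min(beta, best)
        if beta <= alpha:                       # Maximizer would never allow this branch
            break
    return best

# Invented two-ply game tree with leaf utilities.
tree = {"root": ["L", "R"], "L": ["l1", "l2"], "R": ["r1", "r2"]}
values = {"l1": 3, "l2": 5, "r1": 2, "r2": 9}
print(alphabeta("root", -math.inf, math.inf, True, tree, values))  # -> 3
```

After the left subtree returns 3, alpha becomes 3; when the right subtree's first leaf yields 2, beta drops to 2 ≤ alpha, so the leaf r2 is never examined, yet the answer matches plain minimax.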
Types of Agents
Agents can be grouped into five classes based on their degree of perceived
intelligence and capability:
Simple Reflex Agents
Model-Based Reflex Agents
Goal-Based Agents
Utility-Based Agents
Learning Agents
Simple reflex agents ignore the rest of the percept history and act only on the
basis of the current percept. Percept history is the history of all that an agent
has perceived till date. The agent function is based on the condition-action
rule. A condition-action rule is a rule that maps a state i.e, condition to an
action. If the condition is true, then the action is taken, else not. This agent
function only succeeds when the environment is fully observable. For simple
reflex agents operating in partially observable environments, infinite loops are
often unavoidable. It may be possible to escape from infinite loops if the agent
can randomize its actions. Problems with simple reflex agents are:
Very limited intelligence.
No knowledge of non-perceptual parts of the state.
Usually too big to generate and store.
If any change occurs in the environment, the collection of rules
needs to be updated.
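A condition-action rule table can be sketched with the classic two-square vacuum world (an assumed example, not taken from the text above):

```python
# Condition-action rules: (location, status) percept -> action.
# The agent acts on the current percept only, ignoring all percept history.
rules = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def simple_reflex_agent(percept):
    """Map the current percept straight to an action via the rule table."""
    return rules[percept]

print(simple_reflex_agent(("A", "Dirty")))  # -> Suck
print(simple_reflex_agent(("A", "Clean")))  # -> Right
```

Because the table covers every percept in this fully observable toy world, the agent always succeeds; with partial observability the same table could drive it into the infinite loops described above.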
Goal-based agents take decisions based on how far they currently are from
their goal (a description of desirable situations). Their every action is intended
to reduce the distance to the goal. This gives the agent a way to choose among
multiple possibilities, selecting the one that reaches a goal state. The
knowledge that supports its decisions is represented explicitly and can be
modified, which makes these agents more flexible. They usually require search
and planning. The goal-based agent's behavior can easily be changed.