UNIT IV
1. Representing knowledge using rules: Procedural Vs Declarative
knowledge
We have discussed various search techniques in previous units. Now we would
consider a set of rules that represent,
1. Knowledge about relationships in the world and
2. Knowledge about how to solve the problem using the content of the rules.
Declarative Knowledge
▪ A statement in which knowledge is specified, but the use to which that knowledge is to
be put is not given.
▪ For example, laws and people's names; these are facts which can stand alone, not
dependent on other knowledge.
▪ So to use a declarative representation, we must have a program that specifies what is to
be done with the knowledge and how.
▪ For example, a set of logical assertions can be combined with a resolution theorem prover
to give a complete program for solving problems, but in some cases the logical
assertions can be viewed as a program rather than as data to a program.
▪ Here the implication statements define the legitimate reasoning paths and the atomic
assertions provide the starting points of those paths.
▪ These paths define the execution paths, which is similar to the "if-then-else" in
traditional programming.
▪ So logical assertions can be viewed as a procedural representation of knowledge.
2. Logic programming
Logic Programming – Representing Knowledge Using Rules
❖ Logic programming is a programming paradigm in which logical assertions are viewed as
programs.
❖ There are several logic programming systems; PROLOG is one of them.
❖ A PROLOG program consists of several logical assertions, each of which is a Horn
clause,
i.e. a clause with at most one positive literal.
❖ Ex : P, ¬P ∨ Q, P → Q
❖ Facts are represented as Horn clauses for two reasons:
1. Because of the uniform representation, a simple and efficient interpreter can be written.
2. The logic of Horn clauses is decidable.
❖ Also, PROLOG programs are actually sets of
Horn clauses that have been transformed as follows:-
1. If the Horn clause contains no negative literals, then leave it as it is.
3. Forward versus Backward Reasoning
There are four factors influencing the type of reasoning. They are,
1. Are there more possible start states or goal states? We would like to move from the
smaller set of states toward the larger set.
2. In what direction is the branching factor greater? We proceed in the direction with
the lower branching factor.
3. Will the program be asked to justify its reasoning process to a user? If so, we proceed
in the direction that corresponds more closely to the way the user thinks.
4. What kind of event is going to trigger a problem-solving episode? If it is the arrival
of a new fact, forward reasoning makes sense. If it is a query to which a
response is desired, backward reasoning is more natural.
Example 1 of Forward versus Backward Reasoning
❖ It is easier to drive from an unfamiliar place to home than from home to an
unfamiliar place. If we consider home as the starting place and an unfamiliar place as
the goal, then we have to reason backward from the unfamiliar place to home.
Example 2 of Forward versus Backward Reasoning
❖ Consider the problem of symbolic integration. The problem space is a set of
formulas, some of which contain integral expressions. Here START is the given
formula containing some integrals, and GOAL is an equivalent form of the formula
without any integrals. Here we start from the formula with some integrals and proceed
forward to an integral-free expression rather than working backward from an integral-free expression.
Example 3 of Forward versus Backward Reasoning
❖ The third factor is whether the reasoning process can justify its
reasoning to the user. If it can, then it can be applied. For example, doctors are usually unwilling
to accept advice from a diagnostic program that cannot explain its reasoning.
Example 4 of Forward versus Backward Reasoning
❖ Prolog is an example of a backward chaining rule system. In Prolog, rules are restricted to
Horn clauses. This allows for rapid indexing because all the rules for deducing a
given fact share the same rule head. Rules are matched with a unification procedure.
Unification tries to find a set of bindings for variables to equate a sub-goal with the
head of some rule. Rules in a Prolog program are matched in the order in which they
appear.
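To make this matching behaviour concrete, here is a minimal Python sketch of Prolog-style backward chaining. It is not Prolog itself: the "?"-prefixed variable convention, the unify helper, and the tiny parent/grandparent rule base are assumptions made for illustration, and a real Prolog would also rename each rule's variables apart before matching. The control flow, however, mirrors what is described above: try the rules in order, unify the sub-goal with each rule head, then prove the body.

# Minimal Prolog-style backward chainer (illustrative sketch, not real Prolog).
# Terms are tuples such as ("parent", "tom", "bob"); variables start with "?".

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def unify(x, y, subst):
    """Try to extend the substitution `subst` so that x and y become equal."""
    if subst is None or x == y:
        return subst
    if is_var(x):
        return unify(subst[x], y, subst) if x in subst else {**subst, x: y}
    if is_var(y):
        return unify(x, subst[y], subst) if y in subst else {**subst, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            subst = unify(xi, yi, subst)
            if subst is None:
                return None
        return subst
    return None

def substitute(term, subst):
    if is_var(term):
        return substitute(subst[term], subst) if term in subst else term
    if isinstance(term, tuple):
        return tuple(substitute(t, subst) for t in term)
    return term

# Each rule is (head, body); facts are rules with an empty body.
RULES = [
    (("parent", "tom", "bob"), []),
    (("parent", "bob", "ann"), []),
    (("grandparent", "?x", "?z"), [("parent", "?x", "?y"), ("parent", "?y", "?z")]),
]

def prove(goals, subst):
    """Prove the sub-goals in order, trying the rules in the order they appear."""
    if not goals:
        yield subst
        return
    for head, body in RULES:
        s = unify(substitute(goals[0], subst), head, dict(subst))
        if s is not None:
            yield from prove(body + goals[1:], s)

for answer in prove([("grandparent", "tom", "?who")], {}):
    print(substitute("?who", answer))        # -> ann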
4. Matching
Conflict resolution
➢ Preferences based on objects:
▪ Position of objects
➢ Preferences based on states:
▪ Evaluation of states
Can be matched, but not in a manner that satisfies the constraint imposed by the variable
y.
5. Control knowledge.
➢ Knowledge about which paths are most likely to lead to the goal state is often called
search control knowledge. It includes:
— Knowledge about which states are more preferable to others.
— Knowledge about which rules to apply in a given situation.
— Knowledge about the order in which to pursue sub-goals.
— Knowledge about useful sequences of rules to apply.
Search control knowledge = meta-knowledge (knowledge about knowledge).
In a production system:
1. Long-term memory → Rules
2. Short-term memory → Working memory
UNIT V
1. Game playing
• Computer programs which play 2-player games
– game-playing as search
– with the complication of an opponent
• General principles of game-playing and search
– evaluation functions
– minimax principle
– alpha-beta-pruning
– heuristic techniques
• Status of Game-Playing Systems
– in chess, checkers, backgammon, Othello, etc, computers routinely defeat
leading world players
• Applications?
– think of “nature” as an opponent
– economics, war-gaming, medical drug treatment
Figure 1. Exhaustive minimax for the game of nim. Bold lines indicate forced win for MAX. Each node is
marked with its derived value (0 or 1) under minimax.
Types of Games
Game Trees
Figure 2. A (Partial) search tree for the game of Tic-Tac-Toe. The top node is the initial state, and MAX moves first,
placing an X in some square. We show part of the search tree, giving alternating moves by MIN (O) and MAX, until
we eventually reach terminal states, which can be assigned utilities according to the rules of the game.
Minimax algorithm
• Designed to find the optimal strategy for Max and find best move:
– 1. Generate the whole game tree to leaves
– 2. Apply utility (payoff) function to leaves
– 3. Back-up values from leaves toward the root:
• a Max node computes the max of its child values
• a Min node computes the min of its child values
– 4. When the backed-up values reach the root: choose the max value and the corresponding
move (see the sketch below).
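As a concrete illustration of these four steps, here is a minimal Python sketch of minimax over an explicit game tree. The tree encoding (nested lists whose leaves are utility values) is an assumption made for the example; real game programs generate successors from a board state instead.

# Minimal minimax over an explicit game tree (illustrative sketch).
# A tree is either a number (the utility of a terminal state) or a list of subtrees.

def minimax(node, maximizing=True):
    """Return the backed-up minimax value of `node`."""
    if not isinstance(node, list):          # leaf: apply the utility (payoff) function
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

def best_move(root):
    """At the root (a Max node), pick the index of the child with the highest backed-up value."""
    values = [minimax(child, maximizing=False) for child in root]
    return max(range(len(values)), key=values.__getitem__), values

# Max to move; Min replies one level down; the numbers are terminal utilities.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(best_move(tree))   # -> (0, [3, 2, 2]): Max chooses the first move, worth 3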
Properties of minimax
– Chess:
• b ~ 35 (average branching factor)
• d ~ 100 (depth of game tree for a typical game)
• b^d ~ 35^100 ~ 10^154 nodes!!
– Tic-Tac-Toe
• ~5 legal moves per position, total of 9 moves
• 5^9 = 1,953,125
• 9! = 362,880 (Computer goes first)
• 8! = 40,320 (Computer goes second)
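These node counts are easy to recheck; the short snippet below recomputes them (the chess figure is only an order-of-magnitude estimate).

import math

# Tic-Tac-Toe bounds quoted above
print(5 ** 9)               # 1953125
print(math.factorial(9))    # 362880
print(math.factorial(8))    # 40320

# Chess: b^d with b ~ 35 and d ~ 100, expressed as a power of ten
print(round(100 * math.log10(35)))   # ~154, i.e. 35^100 is roughly 10^154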
• An Evaluation Function:
– estimates how good the current board configuration is for a player.
– Typically, one estimates how good it is for the player and how good it is for the
opponent, and subtracts the opponent's score from the player's.
– Othello: Number of white pieces - Number of black pieces
– Chess: Value of all white pieces - Value of all black pieces
• Typical values from -infinity (loss) to +infinity (win) or [-1, +1].
• If the board evaluation is X for a player, it’s -X for the opponent
• Example:
– Evaluating chess boards,
– Checkers
– Tic-tac-toe
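As a small illustration of such an evaluation function, here is a material-count score in the spirit of the Othello and chess examples above; the piece codes and values are assumptions made purely for the example.

# Simple material-count evaluation (illustrative sketch; piece codes and values are assumed).
# A board is a list of piece codes: positive codes belong to the player, negative to the opponent.
PIECE_VALUE = {1: 1, 2: 3, 3: 3, 4: 5, 5: 9, 6: 0}   # pawn, knight, bishop, rook, queen, king

def evaluate(board):
    """Player's material minus the opponent's material; positive favours the player."""
    score = 0
    for piece in board:
        if piece > 0:
            score += PIECE_VALUE[piece]
        elif piece < 0:
            score -= PIECE_VALUE[-piece]
    return score

board = [1, 1, 4, -1, -5]                  # two pawns and a rook vs a pawn and a queen
print(evaluate(board))                     # -> -3 for the player
print(evaluate([-p for p in board]))       # -> +3: if it is X for one side, it is -X for the other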
Evaluation functions
Alpha-Beta Pruning
• Idea:
– Do depth-first search to generate a partial game tree,
– Give the static evaluation function values to the leaves,
– Compute bounds on the internal nodes.
• Alpha, Beta bounds:
– An alpha value at a Max node means that Max's real value is at least alpha.
– A beta value at a Min node means that Min can guarantee a value of at most beta.
• Computation:
– The alpha of a Max node is the maximum value seen so far among its children.
– The beta of a Min node is the lowest value seen so far among its children.
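Building on the minimax sketch above, the following Python sketch shows how the alpha and beta bounds cut off branches; the explicit-tree encoding is again an assumption made for the example.

import math

# Alpha-beta pruning over the same explicit-tree encoding as the minimax sketch above.
def alphabeta(node, maximizing=True, alpha=-math.inf, beta=math.inf):
    """Return the minimax value of `node`, skipping branches that cannot change the result."""
    if not isinstance(node, list):        # leaf: static evaluation value
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)     # best value Max can already guarantee
            if alpha >= beta:             # Min will never let play reach this node
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)           # best value Min can already guarantee
        if alpha >= beta:
            break
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree))   # -> 3, the same answer as plain minimax but with fewer nodes visited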
3. Expert System
Expert system = knowledge + problem-solving methods: a knowledge base that
captures the domain-specific knowledge, and an inference engine that consists of algorithms
for manipulating the knowledge represented in the knowledge base to solve a problem
presented to the system.
Expert systems (ES) are one of the prominent research domains of AI. They were introduced
by researchers at the Stanford University Computer Science Department.
DEFINITION
An expert system is a computer program that simulates the judgement and behavior of
a human or an organization that has expert knowledge and experience in a particular field.
Knowledge Base
It contains domain-specific and high-quality knowledge. Knowledge is required to
exhibit intelligence. The success of any ES depends largely upon the collection of highly
accurate and precise knowledge.
What is Knowledge?
Data is a collection of facts. Information is data organized as facts about
the task domain. Data, information, and past experience combined together are termed
knowledge.
Forward Chaining
It is a strategy of an expert system to answer the question, “What can happen next?”
Here, the Inference Engine follows the chain of conditions and derivations and finally
deduces the outcome. It considers all the facts and rules, and sorts them before settling on
a solution. This strategy is followed when working towards a conclusion, result, or effect.
For example, prediction of share market status as an effect of changes in interest rates.
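A minimal sketch of this data-driven style of inference is shown below; the rule base linking interest rates to the share market is invented purely to echo the example above, not taken from the source.

# Forward chaining: start from the known facts and fire rules until nothing new can be derived.
# The rule base is invented for this example (interest rates -> share market).
RULES = [
    ({"interest rates fall"}, "borrowing is cheaper"),
    ({"borrowing is cheaper"}, "company profits rise"),
    ({"company profits rise"}, "share market rises"),
]

def forward_chain(facts):
    """Repeatedly apply every rule whose conditions are already known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)      # a new fact is derived; keep chaining forward
                changed = True
    return facts

print(forward_chain({"interest rates fall"}))
# -> includes 'share market rises', the predicted effect of the change in interest rates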
Backward Chaining
With this strategy, an expert system finds out the answer to the question, "Why did this
happen?" On the basis of what has already happened, the Inference Engine tries to find out
which conditions could have happened in the past to produce this result. This strategy is
followed for finding out a cause or reason.
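For contrast, the same illustrative rule base can be queried goal-first. The short sketch below asks whether a goal can be explained by the known facts, working backward through the rules (no cycle handling, for brevity).

# Backward chaining over the same illustrative rule base as the forward-chaining sketch.
RULES = [
    ({"interest rates fall"}, "borrowing is cheaper"),
    ({"borrowing is cheaper"}, "company profits rise"),
    ({"company profits rise"}, "share market rises"),
]

def backward_chain(goal, facts):
    """True if `goal` is a known fact or can be derived by proving some rule's conditions."""
    if goal in facts:
        return True
    return any(conclusion == goal and all(backward_chain(c, facts) for c in conditions)
               for conditions, conclusion in RULES)

print(backward_chain("share market rises", {"interest rates fall"}))   # -> True
print(backward_chain("share market rises", set()))                     # -> False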
User Interface
User interface provides interaction between the user of the ES and the ES itself. It
generally uses natural language processing so that it can be used by a user who is well-versed in the
task domain. The user of the ES need not necessarily be an expert in Artificial Intelligence. It
explains how the ES has arrived at a particular recommendation. The explanation may appear
in the following forms:
• Natural language displayed on screen.
• Verbal narrations in natural language.
• Listing of rule numbers displayed on the screen.
The user interface makes it easy to trace the credibility of the deductions.
Requirements of Efficient ES User Interface
• It should help users accomplish their goals in the shortest possible way.
• It should be designed to work with the user's existing or desired work practices.
• Its technology should be adaptable to the user's requirements, not the other way round.
• It should make efficient use of user input.
The action in turn may cause a change in the organism's perception, which can lead to
a different type of action.
The robot is supposed to find a cell next to a boundary or object and then follow that
boundary forever.
As we said, the robot can perceive the state of its neighboring cells.
The robot can move to a free adjacent cell in its column or row. Consequently, there are
four possible actions that it can take:
• North: moves the robot one cell up
• East: moves the robot one cell right
• South: moves the robot one cell down
• West: moves the robot one cell to the left
Now that we have specified the robot's capabilities, its environment, and its task, we
need to give "life" to the robot.
In other words, we have to specify a function that maps sensory inputs to
movement actions so that the robot will carry out its task.
Since we do not want the robot to remember or learn anything, one such function
would be sufficient.
However, it is useful to decompose it in the following way
Perception
➢ For the robot task, there are four binary-valued features of the sensory values that
are useful for computing an appropriate action: X1, X2, X3, X4
➢ Perceptual processing might occasionally give erroneous, ambiguous, or
incomplete information about the robot's environment
♦ Such errors might evoke inappropriate actions
➢ For robots with more complex sensors and tasks, designing appropriate perceptual
processing can be challenging
Action
➢ Specifying a function that selects the appropriate boundary-following action
♦ If none of the features has value 1, the robot can move in any direction until it
encounters a boundary
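The notes break off here, but the kind of stimulus-response function being described can be sketched as a handful of production rules in Python. The feature meanings (X1..X4 = boundary sensed to the north, east, south, west) and the particular rules below are illustrative assumptions, not the exact rules intended in the source; they only show how four binary features can drive a simple boundary-following move.

# Illustrative production rules for boundary following (assumed feature meanings:
# x1..x4 = 1 when the cell to the north, east, south, west is occupied).
def select_action(x1, x2, x3, x4):
    """Pick a move that keeps a sensed boundary on the robot's left while circling it."""
    if x1 and not x2:
        return "East"    # boundary above, east free  -> slide east along it
    if x2 and not x3:
        return "South"   # boundary to the east, south free -> slide south
    if x3 and not x4:
        return "West"    # boundary below, west free -> slide west
    if x4 and not x1:
        return "North"   # boundary to the west, north free -> slide north
    return "North"       # no feature is 1: move until a boundary is encountered

print(select_action(0, 0, 0, 0))   # -> North (open space, keep moving)
print(select_action(1, 0, 0, 0))   # -> East  (boundary above, start following it)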