
Module –III

• Adversarial Search:
– Games and the minimax algorithm.
– AO* search.

• Knowledge representation issues:


– Representation & Mapping.
– Approaches to knowledge representation.
– Issues in knowledge representation.

• Logic Agents:
– Knowledge based agents,
– Wumpus world, logic.
Adversarial search
• Adversarial search is a kind of search in which we examine the problems that arise when
we try to plan ahead in a world where other agents are planning against us.
• In previous topics, we studied search strategies that involve only a single agent aiming to
find a solution, often expressed as a sequence of actions.
• However, there are situations where more than one agent is searching for a solution in the
same search space; this commonly occurs in game playing.
• An environment with more than one agent is termed a multi-agent environment, in which
each agent is an opponent of the others and plays against them. Each agent needs to
consider the actions of the other agents and the effect of those actions on its own
performance.
• So, searches in which two or more players with conflicting goals are trying to explore the
same search space for a solution are called adversarial searches, often known as games.
• Games are modeled as a search problem together with a heuristic evaluation function;
these are the two main factors which help to model and solve games in AI.
Types of games
• Perfect information: A game with perfect information is one in which agents can see the
complete board. Agents have all the information about the game, and they can also see
each other's moves. Examples are chess, checkers, Go, etc.
• Imperfect information: If the agents do not have all the information about the game and
are not aware of everything that is going on, it is called a game with imperfect
information, such as Battleship, blind tic-tac-toe, bridge, etc.
• Deterministic games: Deterministic games are those which follow a strict pattern and set
of rules, and there is no randomness associated with them. Examples are chess, checkers,
Go, tic-tac-toe, etc.
• Non-deterministic games: Non-deterministic games are those which have various
unpredictable events and a factor of chance or luck. This factor of chance or luck is
introduced by dice or cards. These games are random, and the outcome of each action is
not fixed. Such games are also called stochastic games.
Example: Backgammon, Monopoly, Poker, etc.
Formalization of the problem
• A game can be defined as a type of search problem in AI, formalized with the following
elements:
– Initial state: It specifies how the game is set up at the start.
– Player(s): It specifies which player has the move in state s.
– Action(s): It returns the set of legal moves in state s.
– Result(s, a): The transition model, which specifies the state that results from
taking move a in state s.
– Terminal-Test(s): The terminal test is true if the game is over and false
otherwise. States where the game has ended are called terminal states.
– Utility(s, p): A utility function gives the final numeric value for a game that
ends in terminal state s for player p. It is also called a payoff function. For
chess, the outcomes are a win, loss, or draw, with payoff values +1, 0, and ½;
for tic-tac-toe, the utility values are +1, -1, and 0.
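
A minimal Python sketch of this formalization (the class and method names are illustrative
assumptions, not a standard API; each method corresponds to one element above):

    class Game:
        """Abstract two-player game in the formalization above (illustrative sketch)."""

        def initial_state(self):
            raise NotImplementedError    # how the game is set up at the start

        def player(self, s):
            raise NotImplementedError    # which player has the move in state s

        def actions(self, s):
            raise NotImplementedError    # set of legal moves in state s

        def result(self, s, a):
            raise NotImplementedError    # transition model: state after move a in s

        def terminal_test(self, s):
            raise NotImplementedError    # True if the game is over in state s

        def utility(self, s, p):
            raise NotImplementedError    # final payoff of terminal state s for player p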
Game tree
• A game tree is a tree whose nodes are game states and whose edges are the moves made
by the players. A game tree involves the initial state, the actions function, and the result
function.

Example: Tic-Tac-Toe game tree:


• The following figure shows part of the game tree for the tic-tac-toe game.
Following are some key points of the game:
• There are two players, MAX and MIN.
• Players alternate turns, starting with MAX.
• MAX maximizes the result of the game tree.
• MIN minimizes the result.
Example explanation:
• From the initial state, MAX has 9 possible moves, since he starts first. MAX places x and
MIN places o, and both players play alternately until we reach a leaf node where one
player has three in a row or all squares are filled.
• For each node, both players compute the minimax value, which is the best achievable
utility against an optimal adversary.
• Suppose both players know tic-tac-toe well and play their best game. Each player does his
best to prevent the other from winning. MIN acts against MAX in the game.
• So in the game tree we have a layer for MAX and a layer for MIN, and each layer is
called a ply. MAX places x, then MIN places o to prevent MAX from winning, and the
game continues until a terminal node is reached.
• At a terminal node either MIN wins, MAX wins, or it is a draw. This game tree is the
whole search space of possibilities when MIN and MAX play tic-tac-toe, taking turns
alternately.
• Hence adversarial search with the minimax procedure works as follows:
• It aims to find the optimal strategy for MAX to win the game.
• It follows the approach of depth-first search.
• In the game tree, the optimal leaf node could appear at any depth of the tree.
• Minimax values are propagated up the tree from the terminal nodes to the root.
• In a given game tree, the optimal strategy can be determined from the minimax value of
each node, written MINIMAX(n). MAX prefers to move to a state of maximum value and
MIN prefers to move to a state of minimum value, so:
MINIMAX(s) = UTILITY(s)                                 if TERMINAL-TEST(s)
           = max over actions a of MINIMAX(RESULT(s, a))   if PLAYER(s) = MAX
           = min over actions a of MINIMAX(RESULT(s, a))   if PLAYER(s) = MIN
Mini-Max Algorithm in Artificial Intelligence
• The minimax algorithm is a recursive or backtracking algorithm used in decision-making
and game theory.
• It provides an optimal move for the player, assuming that the opponent is also playing
optimally.
• The minimax algorithm uses recursion to search through the game tree. It is mostly used
for game playing in AI, such as chess, checkers, tic-tac-toe, Go, and various other
two-player games.
• The algorithm computes the minimax decision for the current state.
• In this algorithm two players play the game; one is called MAX and the other is called
MIN.
• Each player plays so that the opponent gets the minimum benefit while they themselves
get the maximum benefit.
• Both players are opponents of each other: MAX selects the maximized value and MIN
selects the minimized value.
• The minimax algorithm performs a depth-first search to explore the complete game tree.
• The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then
backs the values up through the tree as the recursion unwinds.
• Working of the Min-Max Algorithm:
– The working of the minimax algorithm can be easily described using an example.
– We take an example game tree representing a two-player game.
– In this example there are two players; one is called the Maximizer and the other the Minimizer.
– The Maximizer tries to get the maximum possible score, and the Minimizer tries to get the
minimum possible score.
– The algorithm applies DFS, so in this game tree we have to go all the way down to the leaves
to reach the terminal nodes.
– At the terminal nodes the terminal values are given, so we compare those values and back
them up through the tree until the initial state is reached.
Pseudo-code for the minimax algorithm (written here as a Python sketch; the node is assumed
to provide is_terminal(), heuristic_value(), and children()):

    import math

    def minimax(node, depth, maximizing_player):
        # Depth limit reached or game over: return the heuristic value of the node.
        if depth == 0 or node.is_terminal():
            return node.heuristic_value()
        if maximizing_player:
            best_value = -math.inf
            for child in node.children():          # MAX chooses the largest child value
                best_value = max(best_value, minimax(child, depth - 1, False))
            return best_value
        else:
            best_value = math.inf
            for child in node.children():          # MIN chooses the smallest child value
                best_value = min(best_value, minimax(child, depth - 1, True))
            return best_value
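
A minimal usage sketch on a two-ply game tree (the Node class is hypothetical, written only
to exercise the function above):

    class Node:
        def __init__(self, value=None, children=()):
            self._value, self._children = value, list(children)
        def is_terminal(self):
            return not self._children
        def heuristic_value(self):
            return self._value
        def children(self):
            return self._children

    # MAX moves at the root; MIN moves at the two inner nodes.
    tree = Node(children=[Node(children=[Node(3), Node(5)]),
                          Node(children=[Node(2), Node(9)])])
    print(minimax(tree, depth=2, maximizing_player=True))   # -> 3 (MIN backs up 3 and 2, MAX picks 3)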
AO* Search
• It is an informed search and works like best-first search.
• The AO* algorithm is based on problem decomposition (breaking the problem down into
smaller pieces).
• It is an efficient method for exploring a solution path.
• AO* is often used for path-finding problems in applications such as video games, but it
was originally designed as a general graph traversal algorithm.
• It finds applications in diverse problems, including parsing with stochastic grammars in
NLP.
• Other cases include informational search with online learning.
• It is useful for searching game trees, problem solving, etc.
• AND-OR Graph
– An AND-OR graph is useful for representing the solution of problems that can be
solved by decomposing them into a set of smaller problems, all of which must then
be solved.

AND-OR Graph
– Each node in the graph points both down to its successors and up to its parent
nodes.
– Each node in the graph also has a heuristic value associated with it.
f(n) = g(n) + h(n)
where f(n) is the cost function, g(n) is the actual cost (edge value), and h(n) is the
heuristic (estimated) value of the node.
• AO* Algorithm
1. Initialise the graph with the start node.
2. Traverse the graph, following the current best path and accumulating nodes that
have not yet been expanded or solved.
3. Pick one of these nodes and expand it. If it has no successors, assign its cost the
value FUTILITY; otherwise calculate f′ for each of its successors.
4. If f′ is 0, mark the node as SOLVED.
5. Propagate the revised value of f′ for the newly expanded node back to its ancestors
to reflect its successors.
6. Wherever possible, follow the most promising routes, and if a node is marked as
SOLVED, mark the parent node as SOLVED.
7. If the start node is SOLVED or its value is greater than FUTILITY, stop; otherwise
repeat from step 2.
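
A much-simplified sketch of the core cost calculation behind AO* on an AND-OR graph. The
graph layout and names below are illustrative assumptions; the full algorithm additionally
keeps SOLVED labels and expands the graph incrementally rather than evaluating it whole:

    import math

    # Each node maps either to a heuristic value h(n) (a leaf / unexpanded node) or to a
    # list of OR alternatives; each alternative is an AND arc, i.e. a list of
    # (edge_cost, successor) pairs that must ALL be solved together.
    graph = {
        "A": [[(1, "B")], [(1, "C"), (1, "D")]],   # A is solved via B, OR via (C AND D)
        "B": 5,
        "C": 2,
        "D": 3,
    }

    def revised_cost(node):
        """Revised cost estimate f' of solving `node`, propagated back from its successors."""
        entry = graph[node]
        if isinstance(entry, (int, float)):        # leaf: only the heuristic estimate h(n)
            return entry
        best = math.inf
        for and_arc in entry:                      # pick the cheapest OR alternative,
            cost = sum(edge + revised_cost(child)  # summing edge cost + f' over the AND arc
                       for edge, child in and_arc)
            best = min(best, cost)
        return best

    print(revised_cost("A"))   # via B: 1+5 = 6; via C AND D: (1+2)+(1+3) = 7; best = 6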
Ch-II: Knowledge representation issues
What is knowledge representation?
Humans are best at understanding, reasoning, and interpreting knowledge. Humans know
things (which is knowledge), and based on their knowledge they perform various actions in the
real world. How machines do all of these things comes under knowledge representation and
reasoning.

• In order to solve complex problems encountered in artificial intelligence, one needs both a large
amount of knowledge and some mechanism for manipulating that knowledge to create solutions.
• Knowledge and representation are two distinct entities. They play central but distinguishable
roles in an intelligent system.
• Knowledge is a description of the world. It determines a system's competence by what it
knows.
• Representation is the way knowledge is encoded. It determines a system's performance in
doing something.
• Different types of knowledge require different kinds of representation.
Representation & Mapping:

A variety of ways of representing knowledge (facts) have been exploited in AI programs. In all
of these, we deal with two kinds of entities.

A. Facts: truths in some relevant world. These are the things we want to represent.
B. Representations: things we can actually manipulate.
Example:
the sky is blue  →  blue(sky)

One way to think of structuring these entities is at two levels:

(a) the knowledge level, at which facts are described, and
(b) the symbol level, at which representations of objects at the knowledge level are defined in
terms of symbols that can be manipulated by programs.

Facts and representations are linked by two-way mappings, called representation mappings.

The forward representation mapping maps from facts to representations.

The backward representation mapping goes the other way, from representations to facts.
• Forward and backward representation mappings are elaborated below:

[Diagram: (1) initial facts are mapped forward into (2) internal representations; reasoning
programs operate on these to produce (3) new internal representations, which are mapped
backward into (4) new facts.]

• Example:

1. Initial fact: "X and Y are brothers."

2. Forward mapping: represent the fact at the symbolic level, e.g. brother(X, Y).

3. Find a new inference by applying operations at the symbolic level,
   e.g. brother(Y, X) ("Y is the brother of X").

4. Map the newly created inference back to the fact level (backward mapping):
   the final fact is "Y is a brother of X."

Representations

• A representation is a set of syntactic and semantic conventions:

– conventions which make it possible to describe things;

– Syntax: the specified symbols and the rules for combining them;

– Semantics: how meaning is associated with the symbol arrangements allowed by the syntax.


Representation and mapping
• Spot is a dog
  dog(Spot)

• Every dog has a tail
  ∀x: dog(x) -> hastail(x)

• Spot has a tail
  hastail(Spot)
  [this is new knowledge, derived at the symbol level]

• The backward mapping function is then used to generate the English sentence
  "Spot has a tail" from the new representation.
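
A tiny Python sketch of this mapping cycle (the tuple encoding of facts is an illustrative
assumption): facts are mapped forward into symbolic tuples, a rule derives a new tuple, and
the result is mapped backward into an English sentence.

    # Forward mapping: English fact -> symbolic representation.
    facts = {("dog", "Spot")}                     # "Spot is a dog"  ->  dog(Spot)

    # Rule at the symbol level: for all x, dog(x) -> hastail(x).
    def apply_dog_rule(kb):
        return kb | {("hastail", x) for (pred, x) in kb if pred == "dog"}

    facts = apply_dog_rule(facts)                 # derives hastail(Spot)

    # Backward mapping: symbolic representation -> English sentence (new knowledge).
    for pred, x in sorted(facts):
        if pred == "hastail":
            print(f"{x} has a tail")              # -> "Spot has a tail"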
• Approaches to knowledge representation (VIMP)
– Simple relational knowledge:
• Information is represented in the form of a table.
• It is a simple way of storing facts using relational methods.
• Each fact about a set of objects is set out systematically in columns.
Example:
    Player    Height    Weight    Wins
    XYZ       6.1       70        20 times
    PQR       5.9       71        25 times
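
A small sketch of simple relational knowledge in Python (the data is the illustrative table
above, stored as one record per row; a query is just a scan over the rows):

    players = [
        {"player": "XYZ", "height": 6.1, "weight": 70, "wins": 20},
        {"player": "PQR", "height": 5.9, "weight": 71, "wins": 25},
    ]

    # A relational query: who has won more than 20 times?
    print([p["player"] for p in players if p["wins"] > 20])   # -> ['PQR']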
– Inheritable knowledge:
• All data is stored in a hierarchy of classes.
• Classes are arranged in a generalization (is-a) hierarchy.
• Every individual frame represents a collection of attributes and their values.
Example (class hierarchy):
    Player (class)
        is a
    Cricketer (class)
      is a            is a
    L.Batsman        R.Batsman      (objects)
    with attribute values such as "opener" and "middle order"
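
A minimal sketch of inheritable knowledge using frames as dictionaries linked by isa slots
(the frame names follow the hierarchy above; the specific attributes and which batsman gets
which batting position are illustrative assumptions):

    frames = {
        "Player":    {"isa": None,        "attrs": {"is-person": True}},
        "Cricketer": {"isa": "Player",    "attrs": {"team-size": 11}},
        "R.Batsman": {"isa": "Cricketer", "attrs": {"batting-position": "middle order"}},
    }

    def lookup(frame, attr):
        """Return an attribute value, inheriting it up the isa hierarchy if needed."""
        while frame is not None:
            node = frames[frame]
            if attr in node["attrs"]:
                return node["attrs"][attr]
            frame = node["isa"]                  # climb to the parent class
        return None

    print(lookup("R.Batsman", "team-size"))      # -> 11, inherited from Cricketer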
– Inferential knowledge:
• Knowledge is represented in the form of formal logic.
• The logic can be predicate or propositional logic.

Ex: all dogs have a tail
    ∀x: dog(x) -> hastail(x)

• The inheritance property is a very useful form of inference.


– Procedural knowledge:
• Procedural knowledge involves knowing how to do something.
• It can be expressed in different ways in a program.
• It relies on implicit learning (doing the task repeatedly improves performance).
• Using the keywords IF…THEN together with AND/OR rules, we frame the procedure.
• Example:
– Medical analysis:

IF: the person has a headache and a high fever
THEN: the person has a viral fever.

IF: the light is red
THEN: stop.
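
A small sketch of such IF…THEN rules in Python (the rule contents follow the examples
above; representing each rule as a condition/conclusion pair is an illustrative choice):

    # Each rule is an (IF-condition, THEN-conclusion) pair over a set of known facts.
    rules = [
        (lambda facts: "headache" in facts and "high fever" in facts, "viral fever"),
        (lambda facts: "light is red" in facts, "stop"),
    ]

    def run_rules(facts):
        return [conclusion for condition, conclusion in rules if condition(facts)]

    print(run_rules({"headache", "high fever"}))   # -> ['viral fever']
    print(run_rules({"light is red"}))             # -> ['stop']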
• Issues in knowledge representation:
The fundamental goal of knowledge representation is to facilitate inference (drawing
conclusions) from knowledge.
1. Important attributes:
• Are there any attributes so basic that they occur in many different types of
problems?
• If there are, we need to make sure that they are handled appropriately in each
problem.
• There are two such attributes, instance and isa, which are important because
each supports the property of inheritance.
2. Relationships among attributes:
• The attributes we use to describe objects are themselves entities.
• The relationships between the attributes of an object are independent of the
specific knowledge they encode, and may involve properties such as:
– Inverses
– Existence in an isa hierarchy
– Techniques for reasoning about values
– Single-valued attributes
– Inverses: What are inverse relationships?
• In the real world, entities are connected by relationships. Many of these relationships
have inverse forms.
• Example: "Student reads Book"
→ Inverse: "Book is read by Student"
• These inverse relations are important for making complete inferences in AI systems.
• Inverse relations allow two-way understanding (A → B and B → A).
– Techniques for reasoning about values:
• Some constraints have to be satisfied by the value of an attribute.
• Information about the type of the value.
• Ex: the value of height must be a number measured in a unit of length.
• Constraints on the value.
• Ex: the age of a person cannot be greater than the age of the person's parents.
– Single-valued attributes:
• These are attributes that are guaranteed to take a unique value.
• Ex: a baseball player can have only a single height at any one time.
3. Granularity of representation: Granularity refers to the level of detail used to represent
knowledge. It determines how fine or coarse the representation is.
• High-level facts (coarse-grained) may not be adequate for inference.
Example: "John traveled to Paris."
This doesn't say how he traveled, when, or why.

• Low-level primitives (fine-grained) may require a lot of storage.

Example: "John bought a plane ticket" → "John boarded the plane" → "John arrived in
Paris"
This is more detailed and supports inferences like "John flew by plane", but it needs more
storage.

• At what level of detail should the knowledge be represented, and what should our
primitives be?
4. Representing sets of objects:
• There are certain properties of an object which are true of it as a member of a set but
not as an individual.
• Ex: a person may be very strict at the company but very calm at home.
5. Finding the right structure as needed:
• The structure of knowledge refers to how the information is organized, linked, and
represented so that it can be stored, accessed, and reasoned with effectively.
Ch-III: Logic Agents
Knowledge-based agents:
What is an agent?
An agent is anything that can perceive its environment through sensors
and act upon that environment through effectors.

What is an intelligent agent?
An intelligent agent is one that is capable of flexible autonomous action
in order to meet its design objectives, where flexible means three things:
• Reactivity: agents are able to perceive their environment and respond in a
timely fashion.
• Pro-activeness: intelligent agents are able to exhibit goal-directed behavior
by taking the initiative in order to satisfy their design objectives.
• Social ability: agents are capable of interacting with other agents in order to
satisfy their design objectives.
• Knowledge-based agents are composed of two main parts:
– a knowledge base, and
– an inference system.
A knowledge-based agent must be able to do the following:
– An agent should be able to represent states, actions, etc.
– An agent should be able to incorporate new percepts.
– An agent can update the internal representation of the world.
– An agent can work out the internal representation of the world.
– An agent can deduce appropriate actions.
• Inference system
– Inference means deriving new sentences from old ones. The inference system allows us to add
new sentences to the knowledge base. A sentence is a proposition about the world. The
inference system applies logical rules to the KB to deduce new information.
– The inference system generates new facts so that the agent can update the KB. An inference
system works mainly with two rule-application strategies:
• Forward chaining
• Backward chaining
Inference in AI: Inference is the process of deriving new information from known facts and rules.
We typically use rules in the format:
IF condition THEN action/conclusion
Forward chaining (data-driven reasoning): starts with known facts and applies rules to infer
new facts, continuing until a goal is reached or no more rules apply.

How it works:
• Match the known facts with the IF conditions of the rules.
• Select the rules whose conditions are satisfied.
• Add the THEN part (conclusion) to the fact base.
• Repeat until the desired goal is derived (or nothing more can be inferred).

Backward chaining (goal-driven reasoning): starts with a goal or hypothesis and works
backwards to see if the known facts support it.
How it works:
• Start with the goal.
• Look for rules whose THEN part matches the goal.
• Try to prove the IF conditions of those rules.
• Repeat the process until you reach known facts.
Forward chaining example:
Rules:
• IF the sky is cloudy THEN it might rain
• IF it might rain THEN take an umbrella
Facts:
The sky is cloudy.
• Forward chaining result: → It might rain
→ Take an umbrella
• Suitable for diagnostic systems, prediction, etc.

Backward chaining example:

Goal: Take an umbrella
→ Why? Because it might rain.
→ Why? Because the sky is cloudy.
→ Is the sky cloudy? (Check the fact base.)
→ Yes → Goal proven.
• Suitable for problem-solving, question answering, diagnosis.
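
A compact Python sketch of both strategies on the umbrella example (encoding each rule as a
condition/conclusion string pair is an illustrative assumption):

    rules = [("sky is cloudy", "it might rain"),
             ("it might rain", "take umbrella")]
    facts = {"sky is cloudy"}

    def forward_chain(facts, rules):
        """Data-driven: keep firing rules whose IF part is known until nothing new is added."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for cond, concl in rules:
                if cond in derived and concl not in derived:
                    derived.add(concl)
                    changed = True
        return derived

    def backward_chain(goal, facts, rules):
        """Goal-driven: a goal is proven if it is a known fact, or some rule concludes it
        and that rule's condition can itself be proven."""
        if goal in facts:
            return True
        return any(concl == goal and backward_chain(cond, facts, rules)
                   for cond, concl in rules)

    print(forward_chain(facts, rules))      # -> {'sky is cloudy', 'it might rain', 'take umbrella'}
    print(backward_chain("take umbrella", facts, rules))   # -> True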
The architecture of a knowledge-based agent:

• The diagram represents a generalized architecture for a knowledge-based agent. The
knowledge-based agent (KBA) takes input from the environment by perceiving it. The
input is passed to the inference engine of the agent, which also communicates with the KB
to decide on an action based on the knowledge stored in the KB. The learning element of
the KBA regularly updates the KB by learning new knowledge.
• Knowledge base: The knowledge base is a central component of a knowledge-based
agent; it is also known as the KB. It is a collection of sentences (here 'sentence' is a
technical term and is not identical to a sentence in English). These sentences are
expressed in a language called a knowledge representation language. The knowledge base
of a KBA stores facts about the world.
Operations performed by a KBA:
Following are the three operations performed by a KBA in order to show
intelligent behavior:
• TELL: This operation tells the knowledge base what the agent perceives from the
environment.
• ASK: This operation asks the knowledge base what action the agent should perform.
• Perform: The agent performs the selected action.
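
A schematic sketch of this TELL-ASK cycle in Python. The kb object's tell()/ask() interface
and the sentence constructors are assumptions made only for illustration; this mirrors the
generic knowledge-based agent loop rather than any particular library:

    # Helper constructors for sentences in the KB's representation language
    # (trivial string encodings here, purely for illustration).
    def make_percept_sentence(percept, t):
        return f"Percept({percept}, {t})"

    def make_action_query(t):
        return f"BestAction({t})"

    def make_action_sentence(action, t):
        return f"Action({action}, {t})"

    def kb_agent_step(kb, percept, t):
        """One step of a generic knowledge-based agent: TELL the percept, ASK for an
        action, TELL the chosen action, then return it so the caller can Perform it."""
        kb.tell(make_percept_sentence(percept, t))
        action = kb.ask(make_action_query(t))
        kb.tell(make_action_sentence(action, t))
        return action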
Various levels of a knowledge-based agent:
• A knowledge-based agent can be viewed at different levels, which
are given below:
1. Knowledge level
– The knowledge level is the first level of a knowledge-based agent. At this level,
we specify what the agent knows and what the agent's goals are.
2. Logical level:
– At this level, we consider how the knowledge is represented and stored. At this
level, sentences are encoded into different logics.
3. Implementation level:
– This is the physical representation of the logic and knowledge. At the
implementation level, the agent performs actions as per the logical and knowledge
levels.
Wumpus world
• The Wumpus world is a simple world used to illustrate the worth of a
knowledge-based agent and to demonstrate knowledge representation. It
was inspired by the video game Hunt the Wumpus by Gregory Yob (1973).
• The Wumpus world is a cave of 4×4 rooms connected by passageways,
so there are a total of 16 rooms connected with each other. We have a
knowledge-based agent who will move around in this world. The cave
has a room with a beast called the Wumpus, who eats anyone who enters
the room. The Wumpus can be shot by the agent, but the agent has only a
single arrow. In the Wumpus world there are some pit rooms which are
bottomless, and if the agent falls into a pit he will be stuck there forever.
The exciting thing about this cave is that in one room there is a
possibility of finding a heap of gold. So the agent's goal is to find the
gold and climb out of the cave without falling into a pit or being eaten by
the Wumpus. The agent gets a reward if he comes out with the gold, and
a penalty if he is eaten by the Wumpus or falls into a pit.
Following is a sample diagram representing the Wumpus world. It shows
some rooms with pits, one room with the Wumpus, and the agent at
square (1, 1) of the world.
There are also some components which can help the agent navigate the
cave. These components are given as follows:
• The rooms adjacent to the Wumpus room are smelly, so they have a
stench.
• The rooms adjacent to a pit have a breeze, so if the agent reaches a
room next to a pit, he will perceive the breeze.
• There will be glitter in a room if and only if the room has the gold.
• The Wumpus can be killed by the agent if the agent is facing it, and
the Wumpus will then emit a horrible scream which can be heard
anywhere in the cave.
• PEAS description of the Wumpus world:
– To explain the Wumpus world we give the PEAS description below:
Performance measure:
• +1000 reward points if the agent comes out of the cave with the gold.
• -1000 points penalty for being eaten by the Wumpus or falling into a
pit.
• -1 for each action, and -10 for using the arrow.
• The game ends if the agent dies or comes out of the cave.
Environment:
• A 4×4 grid of rooms.
• The agent initially starts in square [1, 1], facing toward the right.
• The locations of the Wumpus and the gold are chosen randomly,
excluding the first square [1, 1].
• Each square of the cave can be a pit with probability 0.2, except the
first square.
Actuators:
• Left turn,
• Right turn
• Move forward
• Grab
• Release
• Shoot.
Sensors:
– The agent will perceive a stench if he is in a room adjacent to the Wumpus
(not diagonally).
– The agent will perceive a breeze if he is in a room directly adjacent to a pit.
– The agent will perceive glitter in the room where the gold is present.
– The agent will perceive a bump if he walks into a wall.
– When the Wumpus is shot, it emits a horrible scream which can be perceived
anywhere in the cave.
– These percepts can be represented as a five-element list, with a different
indicator for each sensor.
– For example, if the agent perceives a stench and a breeze, but no glitter, no bump, and
no scream, this can be represented as:
[Stench, Breeze, None, None, None].
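
A small sketch of the five-element percept list in Python (the namedtuple field names are an
illustrative choice):

    from collections import namedtuple

    Percept = namedtuple("Percept", ["stench", "breeze", "glitter", "bump", "scream"])

    # The agent perceives a stench and a breeze, but no glitter, bump, or scream:
    p = Percept(stench="Stench", breeze="Breeze", glitter=None, bump=None, scream=None)
    print(list(p))   # -> ['Stench', 'Breeze', None, None, None]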
