AI Chapter 3
Here is an example of a goal to be achieved by an intelligent goal-based agent:
To drive from city A to city B, crossing many intermediate transition states.
1. Goal formulation
is the step in which the agent adopts a goal to be achieved, based on the current situation.
2. Problem formulation
is the step that puts down the actions and states that the agent has to consider, given a goal.
3. Search
is the process of looking for the various sequences of actions that lead to a goal state, evaluating them, and choosing the optimal sequence.
4. Execute
is the final step, in which the agent executes the chosen sequence of actions to reach the solution/goal.
Real World problem: Examples
1. Touring Problem:
Imagine an agent in the city of Arad, Romania, enjoying a touring holiday. The
agent’s performance measure contains many factors: enjoyment, sights visited, and so on.
Now, suppose the agent has a nonrefundable ticket to fly out of Bucharest the
following day. In that case, it makes sense for the agent to adopt the goal of
getting to Bucharest.
1. Goal Formulation
2. Problem Formulation
For the agent driving in Romania, it is reasonable to suppose that each
city on the map has a sign indicating its presence to arriving drivers.
Discrete: at any given state there are only finitely many actions to
choose from.
Deterministic: if an agent chooses to drive from Arad to Sibiu, it does end
up in Sibiu.
3. Search: The process of looking for a sequence of actions that reaches the
goal is called search.
Cont’d…
Open-loop: while the agent is executing the solution sequence, it ignores its
percepts when choosing an action, because it knows in advance what they
will be. This is called an open-loop system, because ignoring the percepts
breaks the loop between agent and environment.
Fig. 1 A simple problem-solving agent. It first formulates a goal and a problem,
searches for a sequence of actions that would solve the problem, and then executes
the actions one at a time. When this is complete, it formulates another goal and starts
over.
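The loop in Fig. 1 can be sketched in Python. This is a minimal illustration, not the textbook's pseudocode: the road map below is an assumed small fragment of the Romania map, and breadth-first search stands in for the (as yet unspecified) search step.

```python
from collections import deque

# Assumed fragment of the Romania road map (illustration only).
roads = {"Arad": ["Sibiu", "Timisoara", "Zerind"],
         "Sibiu": ["Fagaras", "Rimnicu Vilcea"],
         "Fagaras": ["Bucharest"],
         "Rimnicu Vilcea": ["Pitesti"],
         "Pitesti": ["Bucharest"]}

def bfs(problem):
    """Search step: breadth-first search over the road map, returning a
    sequence of Go(city) actions from the start city to the goal city."""
    start, goal = problem
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        city, path = frontier.popleft()
        if city == goal:
            return path
        for nxt in roads.get(city, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [f"Go({nxt})"]))
    return None  # no solution found

def simple_problem_solving_agent(state):
    """One episode of the Fig. 1 loop: formulate a goal and a problem,
    search for a solution sequence, then hand back the actions to be
    executed one at a time."""
    goal = "Bucharest"         # goal formulation
    problem = (state, goal)    # problem formulation
    seq = bfs(problem)         # search
    return seq or []           # actions to execute

actions = simple_problem_solving_agent("Arad")
```

Running the episode from Arad yields a sequence of Go(...) actions ending in Bucharest.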
Well-defined problems and solutions (Problem Formulation)
A problem can be defined formally by five components:
1. Initial state: the state that the agent starts in.
In our example, the initial state for our agent in Romania might be described as In(Arad).
2. Actions: a description of the possible actions available to the agent.
Given a particular state s, ACTIONS(s) returns the set of actions that can be
executed in s. We say that each of these actions is applicable in s.
In our example, from the state In(Arad), the applicable actions are
{Go(Sibiu), Go(Timisoara), Go(Zerind)}
Cont’d…
3. Transition model (or successor function): a description of what each
action does, specified by a function RESULT(s, a) that returns the state
that results from doing action a in state s.
4. Goal test: determines whether a given state is a goal state. Sometimes there is an
explicit set of possible goal states, and the test simply checks whether the given
state is one of them.
In our example, the agent’s goal in Romania is the singleton set {In(Bucharest)}.
5. Path cost: a function that assigns a numeric cost to each path. For the agent
trying to get to Bucharest, time is of the essence, so the cost of a path might
be its length in kilometers.
We assume that the cost of a path can be described as the sum of the costs of the
individual actions along the path. The step cost of taking action a in state s to reach state s1
is denoted by c(s, a, s1).
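The five components can be collected into a small Python class. A minimal sketch, assuming a three-road fragment of the Romania map and a string encoding In(city) for states; the class and method names are illustrative, not a fixed API.

```python
class RomaniaProblem:
    """The five-component problem formulation, for a three-road
    fragment of the Romania map (distances in kilometers)."""

    def __init__(self):
        self.initial = "In(Arad)"                      # 1. initial state
        self.roads = {("Arad", "Sibiu"): 140,
                      ("Arad", "Timisoara"): 118,
                      ("Arad", "Zerind"): 75}

    def actions(self, s):                              # 2. ACTIONS(s)
        """Applicable actions in state s, e.g. Go(Sibiu) from In(Arad)."""
        here = s[3:-1]                                 # strip "In(" and ")"
        return [f"Go({b})" for (a, b) in self.roads if a == here]

    def result(self, s, a):                            # 3. RESULT(s, a)
        """Doing Go(X) in any state yields In(X)."""
        return f"In({a[3:-1]})"

    def goal_test(self, s):                            # 4. goal test
        return s == "In(Bucharest)"

    def step_cost(self, s, a, s1):                     # 5. c(s, a, s1)
        return self.roads[(s[3:-1], s1[3:-1])]

p = RomaniaProblem()
```

For instance, `p.actions("In(Arad)")` returns the three Go(...) actions listed above.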
State Space
Together, the initial state, actions, and transition model implicitly define the state
space of the problem: the set of all states reachable from the initial state by any
sequence of actions.
It represents a problem in terms of states and operators that change states.
The state space forms a directed network or graph in which the nodes are
states and the links between nodes are actions. (The map of Romania shown
in Figure below can be interpreted as a state-space graph)
Generally, a state space is a graph (V, E), where V is a set of nodes and E is
a set of arcs; each arc is directed from one node to another.
For example, consider the map of Romania in Figure 2.
The problem formulation is therefore:
Path cost: c(In(Arad), Go(Zerind), In(Zerind)) = 75, c(In(Arad), Go(Sibiu), In(Sibiu)) = 140,
c(In(Arad), Go(Timisoara), In(Timisoara)) = 118, etc.
Cont’d…
For example, if the agent is in the state In(Arad), there are three possible
actions: Go(Sibiu), Go(Timisoara), and Go(Zerind).
Cont’d…
A solution to a problem
is an action sequence that leads from the initial state to a goal state.
Example: an airline travel problem.
States: Each state obviously includes a location (airport) and the current
time. Furthermore, the state must record extra information, such as the base fare,
the flight segments taken, and their status as domestic or international, to
determine the cost of an action.
Initial state: This is specified by the user’s query.
Actions: Take any flight from the current location, in any seat class,
leaving after the current time and allowing enough time for within-airport
transfer if needed.
Cont’d…
Transition model: The state resulting from taking a flight will have the
flight’s destination as the current location and the flight’s arrival time as
the current time.
Path cost: This depends on monetary cost, waiting time, flight time,
customs and immigration procedures, seat quality, time of day, type of
airplane, frequent flyer mileage awards and so on.
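The flight transition model above can be sketched as follows. A minimal sketch: the field names, the minutes-since-midnight clock, and the 45-minute transfer window are assumptions for illustration; real fare and timetable data would be far richer.

```python
from typing import NamedTuple

class FlightState(NamedTuple):
    location: str   # current airport
    time: int       # current time, in minutes since midnight (a simplification)

class Flight(NamedTuple):
    origin: str
    destination: str
    depart: int
    arrive: int

def result(state: FlightState, flight: Flight) -> FlightState:
    """Transition model: the new location is the flight's destination,
    the new time is the flight's arrival time."""
    return FlightState(flight.destination, flight.arrive)

def applicable(state: FlightState, flight: Flight, transfer: int = 45) -> bool:
    """A flight is applicable if it leaves the current airport after the
    current time, allowing `transfer` minutes for within-airport transfer."""
    return (flight.origin == state.location
            and flight.depart >= state.time + transfer)
```

A flight departing 100 minutes from now is applicable; one departing in 20 minutes is not, because the transfer window is missed.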
Toy problems: Example
Cont’d…
The 8-puzzle:
States: A 3 × 3 array configuration, with each of the eight tiles on the board and
the blank in one of the nine squares.
Actions: Movements of the blank space: Left, Right, Up, or Down (depending on
where the blank is, some of these are inapplicable).
Transition model: Given a state and an action, this returns the resulting state.
Example: If we apply Left to the start state in the figure, the resulting state has
the 5 and the blank switched.
Cont’d…
Goal test: This checks whether the state matches the goal
configuration shown in the figure.
Path cost: Each step costs 1, so the path cost is the number of steps
in the path.
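The sliding-tile transition model can be sketched as a function on 9-tuples. A minimal sketch: the board is read row by row with 0 standing for the blank, actions move the blank, and the start state below is assumed to match the textbook figure (blank in the centre, 5 to its left).

```python
def result(state, action):
    """8-puzzle transition model.  A state is a 9-tuple read row by row,
    with 0 standing for the blank; an action moves the blank Left, Right,
    Up, or Down, swapping it with the neighbouring tile.  An inapplicable
    action (blank at the edge) leaves the state unchanged."""
    i = state.index(0)                     # position of the blank
    row, col = divmod(i, 3)
    dr, dc = {"Left": (0, -1), "Right": (0, 1),
              "Up": (-1, 0), "Down": (1, 0)}[action]
    r, c = row + dr, col + dc
    if not (0 <= r < 3 and 0 <= c < 3):
        return state
    j = r * 3 + c
    board = list(state)
    board[i], board[j] = board[j], board[i]
    return tuple(board)

# Assumed start state (layout of the textbook figure); 0 is the blank.
start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
```

Applying Left to this start state swaps the 5 and the blank, as in the example above.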
The vacuum world:
States: The state is determined by both the agent location and the dirt locations.
The agent is in one of two locations, each of which might or might not contain dirt.
Thus there are 2 × 2^2 = 8 possible world states. A larger environment with n
locations has n · 2^n states, where n is the number of rooms.
Cont’d…
Initial state: Any state can be designated and taken as the initial state.
Actions: In this simple environment, each state has just three actions: move Left, Right, and
Suck. However, larger environments can also include other actions such as move Up and
move Down in the state space.
Transition model: The actions have their expected effects, except that moving Left in the
leftmost square, moving Right in the rightmost square, and sucking in a clean square have no
effect.
Goal Test: This checks whether all the rooms are clean or not.
Path Cost: Each step costs a unit cost (1), so the path cost is the number of steps in the path.
Move right cost=>1, Move Left cost=>1.
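The vacuum-world formulation above can be sketched in Python. A minimal sketch: a state is a pair (agent location, set of dirty squares) over two squares labelled "A" (left) and "B" (right); this representation is an assumption for illustration.

```python
from itertools import combinations

SQUARES = ("A", "B")

def all_states():
    """Enumerate all 2 x 2^2 = 8 world states."""
    dirt_sets = [frozenset(c) for n in range(len(SQUARES) + 1)
                 for c in combinations(SQUARES, n)]
    return [(loc, dirt) for loc in SQUARES for dirt in dirt_sets]

def result(state, action):
    """Transition model: Left/Right move the agent (no effect at the
    edge), Suck cleans the current square (no effect if already clean)."""
    loc, dirt = state
    if action == "Left":
        return ("A", dirt)
    if action == "Right":
        return ("B", dirt)
    if action == "Suck":
        return (loc, dirt - {loc})
    return state

def goal_test(state):
    """Goal: every square is clean."""
    return not state[1]
```

Starting with both squares dirty and the agent at "A", the sequence [Suck, Right, Suck] reaches a goal state.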
Fig . The state space for the vacuum world
Links denote actions: L = Left, R = Right, S = Suck
Cont’d…
What will happen if the agent is initially in state 5 and formulates the action
sequence [Right, Suck]? The agent calculates and knows that it will reach a goal state.
If the agent moves Right, it is then in state 6.
After taking the action Suck, which removes the dirt in state 6, the agent finally reaches state 8.
N.B.: If the environment is completely observable, the vacuum cleaner always knows
where it is and where the dirt is. The solution is then reduced to searching for a path
from the initial state to the goal state.
Exercise: River Crossing Puzzles
Missionary-and-cannibal problem
Three missionaries and three cannibals are on one side of a river that they wish to
cross. There is a boat that can hold one or two people. Find an action sequence that
brings everyone safely to the opposite bank (i.e. Cross the river). But you must never
leave a group of missionaries outnumbered by cannibals on the same bank (in any
place).
Identify the set of possible states and operators
Construct the state space of the problem using suitable representation
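A state representation for this exercise can be sketched as follows (a hint, not the full solution). The triple encoding, i.e. missionaries on the left bank, cannibals on the left bank, and the boat's side, is one possible choice, assumed here for illustration.

```python
def is_safe(state):
    """state = (m_left, c_left, boat_on_left) with 3 missionaries and
    3 cannibals in total.  A state is safe when missionaries are not
    outnumbered on either bank (a bank with no missionaries is fine)."""
    m, c, _ = state
    if not (0 <= m <= 3 and 0 <= c <= 3):
        return False
    left_ok = m == 0 or m >= c
    right_ok = (3 - m) == 0 or (3 - m) >= (3 - c)
    return left_ok and right_ok

def moves(state):
    """Operators: the boat carries one or two people across, yielding
    only the safe successor states."""
    m, c, boat = state
    d = -1 if boat else 1     # people leave the bank the boat is on
    candidates = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]
    nexts = [(m + d * dm, c + d * dc, not boat) for dm, dc in candidates]
    return [s for s in nexts if is_safe(s)]
```

From the start state (3, 3, True) there are three safe moves: ferry one cannibal, two cannibals, or one of each.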
Exercise:
A farmer returns from the market, where he bought a goat, a cabbage, and a wolf. On
the way home he must cross a river. His boat is small and unable to transport more
than one of his purchases. He cannot leave the goat alone with the cabbage (because
the goat would eat it), nor can he leave the goat alone with the wolf (because the
goat would be eaten). How can the farmer get everything safely to the other side?
THANK YOU