AL3391-artificial-intelligence unit-1
UNIT 1 - AI
Artificial Intelligence
(Anna University)
Artificial Intelligence:
“Artificial Intelligence is the ability of a computer to act like a human being”.
Artificial intelligence systems consist of the people, procedures, hardware, software, data, and
knowledge needed to develop computer systems and machines that demonstrate the characteristics
of intelligence.
Human Sensors:
Eyes, ears, and other sensory organs.
Human Actuators:
Hands, legs, mouth, and other body parts.
Robotic Sensors:
Microphones, cameras, and infrared range finders.
Robotic Actuators:
Motors, displays, speakers, etc.
Rational Agent - A system is rational if it does the "right thing", given what it knows.
Characteristics of a Rational Agent:
▪ The agent's prior knowledge of the environment.
▪ The performance measure that defines the criterion of success.
▪ The actions that the agent can perform.
▪ The agent's percept sequence to date.
For every possible percept sequence, a rational agent should select an action that is expected to
maximize its performance measure, given the evidence provided by the percept sequence and whatever built-
in knowledge the agent has.
An omniscient agent knows the actual outcome of its actions and can act accordingly; but
omniscience is impossible in reality.
An Ideal Rational Agent perceives and then acts, and it achieves the highest possible performance measure.
E.g., crossing a road: first, perception of both sides occurs, and only then the action.
No perception occurs in a Degenerate Agent.
E.g., a clock. It does not view its surroundings; no matter what happens outside, the clock works
according to its built-in program.
An Ideal Agent is described by ideal mappings: "Specifying which action an agent ought to take in
response to any given percept sequence provides a design for an ideal agent".
E.g., the SQRT function calculation in a calculator.
Doing actions in order to modify future percepts - sometimes called information gathering - is an
important part of rationality.
A rational agent should be autonomous: it should learn from its own experience rather than rely only on
built-in prior knowledge.
Nature of Environments
Properties of Environment
An environment is everything in the world that surrounds the agent, but it is not a part
of the agent itself. An environment can be described as the situation in which an agent is present.
The environment is where the agent lives and operates, and it provides the agent with something to sense
and act upon.
1. Fully observable vs. Partially Observable:
• If an agent's sensors can sense or access the complete state of the environment at each point in time, then
it is a fully observable environment; otherwise it is partially observable.
• A fully observable environment is easy, as there is no need to maintain an internal state to keep track
of the history of the world.
2. Deterministic vs. Stochastic:
• If an agent's current state and selected action completely determine the next state of the
environment, then such an environment is called a deterministic environment.
• A stochastic environment is random in nature and cannot be determined completely by an
agent.
• In a deterministic, fully observable environment, the agent does not need to worry about uncertainty.
3. Episodic vs. Sequential:
• In an episodic environment, there is a series of one-shot actions, and only the current percept is
required for the action.
• However, in a sequential environment, an agent requires memory of past actions to determine
the next best actions.
4. Single-agent vs. Multi-agent:
• If only one agent is involved in an environment and operates by itself, then such an
environment is called a single-agent environment.
• However, if multiple agents are operating in an environment, then such an environment is
called a multi-agent environment.
• The agent design problems in a multi-agent environment are different from those in a single-agent
environment.
5. Static vs. Dynamic:
• If the environment can change while an agent is deliberating, then such an environment is
called a dynamic environment; otherwise it is called a static environment.
• Static environments are easy to deal with because an agent does not need to keep looking at the world
while deciding on an action.
• However, in a dynamic environment, agents need to keep looking at the world before each action.
• Taxi driving is an example of a dynamic environment, whereas crossword puzzles are an
example of a static environment.
6. Discrete vs. Continuous:
• If there is a finite number of percepts and actions that can be performed in an environment,
then such an environment is called a discrete environment; otherwise it is called a continuous environment.
• A chess game is a discrete environment, as there is a finite number of moves that can be
performed.
• A self-driving car is an example of a continuous environment.
7. Known vs. Unknown:
• Known and unknown are not actually features of the environment; rather, they describe the agent's state of
knowledge about the environment.
• In a known environment, the results of all actions are known to the agent, while in an unknown
environment the agent needs to learn how it works in order to perform an action.
• It is quite possible for a known environment to be partially observable and for an unknown
environment to be fully observable.
Task environments are essentially the "problems" to which rational agents are the "solutions."
Environment:
All the surrounding things and conditions of an agent fall in this section. It basically consists of all the things
under which the agents work.
Actuators:
The devices, hardware or software through which the agent performs any actions or processes any
information to produce a result are the actuators of the agent.
Sensors:
The devices through which the agent observes and perceives its environment are the sensors of the agent.
Search: Searching is a step-by-step procedure to solve a search problem in a given search space. A
search problem can have three main factors:
1. Search Space: The search space represents the set of possible solutions which a system may
have.
2. Start State: It is the state from which the agent begins the search.
3. Goal test: It is a function which observes the current state and returns whether the goal state has
been achieved or not.
Search tree: A tree representation of a search problem is called a search tree. The root of the search
tree is the root node, which corresponds to the initial state.
Actions: It gives the description of all the available actions to the agent.
Transition model: A description of what each action does; it can be represented as a transition model.
Path Cost: It is a function which assigns a numeric cost to each path.
Solution: It is an action sequence which leads from the start node to the goal node.
Optimal Solution: A solution that has the lowest cost among all solutions.
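The components above can be collected into a small data structure. The following is a minimal sketch in Python; the class name, the example graph, and the edge costs are illustrative assumptions, not part of any standard library.

```python
# A minimal sketch of the search-problem components described above.
# The state names, example graph, and edge costs are illustrative assumptions.
class SearchProblem:
    def __init__(self, start, goal, graph, costs):
        self.start = start    # start state: where the agent begins the search
        self.goal = goal      # used by the goal test
        self.graph = graph    # transition model: state -> list of successor states
        self.costs = costs    # path cost: (state, successor) -> numeric cost

    def goal_test(self, state):
        # observes the current state and returns whether the goal is achieved
        return state == self.goal

    def actions(self, state):
        # the available actions (here, moves to successor states)
        return self.graph.get(state, [])

    def path_cost(self, path):
        # assigns a numeric cost to a path by summing its step costs
        return sum(self.costs[(a, b)] for a, b in zip(path, path[1:]))

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
costs = {("A", "B"): 1, ("A", "C"): 4, ("B", "D"): 2, ("C", "D"): 1}
problem = SearchProblem("A", "D", graph, costs)
print(problem.goal_test("D"))              # True
print(problem.path_cost(["A", "B", "D"]))  # 3
```

Here ["A", "B", "D"] is a solution (an action sequence from the start node to the goal node), and it would be the optimal solution if no cheaper path exists.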
1) Vacuum World
States : The state is determined by both the agent location and the dirt locations. The agent is in one of
the 2 locations, each of which might or might not contain dirt. Thus there are 2*2^2=8 possible world
states.
Initial state: Any state can be designated as the initial state.
Actions: In this simple environment, each state has just three actions: Left, Right, and Suck. Larger
environments might also include Up and Down.
Transition model: The actions have their expected effects, except that moving Left in the leftmost
square, moving Right in the rightmost square, and Sucking in a clean square have no effect. The complete
state space is shown in Figure.
Goal test: This checks whether all the squares are clean.
Path cost: Each step costs 1, so the path cost is the number of steps in the path.
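The vacuum-world formulation above can be sketched directly in code. A state is represented here as (agent location, dirt at A, dirt at B); this encoding and the function names are assumptions for illustration.

```python
# A sketch of the vacuum-world transition model described above.
# A state is (agent_location, dirt_in_A, dirt_in_B); there are 2 * 2**2 = 8 states.
def result(state, action):
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)     # no effect if already in the leftmost square
    if action == "Right":
        return ("B", dirt_a, dirt_b)     # no effect if already in the rightmost square
    if action == "Suck":
        if loc == "A":
            return ("A", False, dirt_b)  # sucking a clean square has no effect
        return ("B", dirt_a, False)
    return state

def goal_test(state):
    # the goal test checks whether all the squares are clean
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b

state = ("A", True, True)
for action in ("Suck", "Right", "Suck"):   # each step costs 1, so this path costs 3
    state = result(state, action)
print(goal_test(state))   # True
```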
2) 8-Puzzle Problem
States: A state description specifies the location of each of the eight tiles and the blank in one of the nine
squares.
Initial state: Any state can be designated as the initial state. Note that any given goal can be reached
from exactly half of the possible initial states.
Actions: The simplest formulation defines the actions as movements of the blank space Left, Right, Up, or
Down. Different subsets of these are possible depending on where the blank is.
Transition model: Given a state and action, this returns the resulting state; for example, if we apply
Left to the start state in Figure 3.4, the resulting state has the 5 and the blank switched.
Goal test: This checks whether the state matches the goal configuration shown in Figure.
Path cost: Each step costs 1, so the path cost is the number of steps in the path.
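The 8-puzzle formulation above translates naturally into code. The sketch below represents a state as a 9-tuple read row by row, with 0 for the blank; this encoding and the goal configuration are assumptions for illustration.

```python
# A sketch of the 8-puzzle actions and transition model described above.
# A state is a 9-tuple read row by row; 0 marks the blank square.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)              # assumed goal configuration

OFFSET = {"Left": -1, "Right": 1, "Up": -3, "Down": 3}

def actions(state):
    # different subsets of the four blank moves apply depending on where the blank is
    i = state.index(0)
    acts = []
    if i % 3 > 0: acts.append("Left")
    if i % 3 < 2: acts.append("Right")
    if i >= 3: acts.append("Up")
    if i < 6: acts.append("Down")
    return acts

def result(state, action):
    # swap the blank with the tile it moves onto
    i = state.index(0)
    j = i + OFFSET[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

start = (1, 0, 2, 3, 4, 5, 6, 7, 8)
print(result(start, "Left") == GOAL)   # True: one step, so the path cost is 1
```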
3) 8-Queens Problem:
States: Any arrangement of 0 to 8 queens on the board is a state.
Initial state: No queens on the board.
Actions: Add a queen to any empty square.
Transition model: Returns the board with a queen added to the specified square.
Goal test: 8 queens are on the board, none attacking another.
Search Algorithms
Following are the four essential properties of search algorithms, used to compare their efficiency:
Completeness: A search algorithm is said to be complete if it is guaranteed to return a solution whenever at
least one solution exists for any input.
Optimality: If the solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among
all possible solutions, then it is said to be an optimal solution.
Time Complexity: Time complexity is a measure of the time an algorithm takes to complete its task.
Space Complexity: It is the maximum storage space required at any point during the search.
Based on the search problems we can classify the search algorithms into uninformed (Blind search) search
and informed search (Heuristic search) algorithms.
Uninformed/Blind Search:
An uninformed search does not use any domain knowledge, such as closeness or the location of the goal. It
operates in a brute-force way, as it only includes information about how to traverse the tree and how to identify
leaf and goal nodes. Uninformed search explores the search tree without any information about the search space
beyond the initial state, the operators, and the test for the goal, so it is also called blind search. It examines
each node of the tree until it reaches the goal node.
o Breadth-first search
o Uniform cost search
o Depth-first search
o Iterative deepening depth-first search
o Bidirectional Search
Uninformed search is a class of general-purpose search algorithms which operates in brute force-way.
Uninformed search algorithms do not have additional information about state or search space other than
how to traverse the tree, so it is also called blind search.
1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional Search
1. Breadth-first Search:
o Breadth-first search is the most common search strategy for traversing a tree or graph. This algorithm
searches breadth wise in a tree or graph, so it is called breadth-first search.
o The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the
current level before moving to nodes of the next level.
o The breadth-first search algorithm is an example of a general graph-search algorithm.
o Breadth-first search is implemented using a FIFO queue data structure.
Advantages:
o BFS will provide a solution if any solution exists.
o If there is more than one solution for a given problem, BFS will find the minimal solution, i.e. the one
requiring the least number of steps.
Disadvantages:
o It requires lots of memory since each level of the tree must be saved into memory to expand the next
level.
o BFS needs lots of time if the solution is far away from the root node.
Example:
In the below tree structure, we have shown the traversal of the tree using the BFS algorithm from root node S to
goal node K. The BFS algorithm traverses in layers, so it will follow the path shown by the dotted
arrow, and the traversed path will be:
Time Complexity: The time complexity of the BFS algorithm is given by the number of nodes traversed
until the shallowest goal node: O(b^d), where d is the depth of the shallowest solution and b is the branching
factor (the number of successors of each node).
Space Complexity: The space complexity of BFS is given by the memory size of the frontier, which is O(b^d).
Completeness: BFS is complete, which means if the shallowest goal node is at some finite depth, then BFS will
find a solution.
Optimality: BFS is optimal if path cost is a non-decreasing function of the depth of the node.
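The FIFO-queue mechanics described above can be sketched as follows. The example graph (with start S and goal K, echoing the example) is an illustrative assumption; a real problem would supply its own successor function.

```python
from collections import deque

# A minimal BFS sketch using the FIFO queue described above.
# The graph and node names are illustrative assumptions.
def bfs(graph, start, goal):
    frontier = deque([[start]])          # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()        # shallowest path is expanded first
        node = path[-1]
        if node == goal:
            return path                  # the shallowest goal is found first
        for succ in graph.get(node, []):
            if succ not in explored:
                explored.add(succ)
                frontier.append(path + [succ])
    return None                          # no solution exists

graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E"],
         "C": [], "D": [], "E": ["K"], "K": []}
print(bfs(graph, "S", "K"))   # ['S', 'B', 'E', 'K']
```

Because the whole frontier (one full level of the tree) is kept in the queue, the memory cost grows as O(b^d), matching the space complexity above.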
2. Depth-first Search
o Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
o It is called the depth-first search because it starts from the root node and follows each path to its greatest
depth node before moving to the next path.
o DFS uses a stack data structure for its implementation.
o The process of the DFS algorithm is similar to the BFS algorithm.
Note: Backtracking is an algorithm technique for finding all possible solutions using recursion.
Advantage:
o DFS requires much less memory, as it only needs to store a stack of the nodes on the path from the root node
to the current node.
o It takes less time to reach the goal node than the BFS algorithm (if it traverses the right path).
Disadvantage:
o There is the possibility that many states keep re-occurring, and there is no guarantee of finding a
solution.
o The DFS algorithm goes deep down in its search, and sometimes it may go into an infinite loop.
Example:
In the below search tree, we have shown the flow of depth-first search.
It will start searching from root node S and traverse A, then B, then D and E. After traversing E, it will backtrack
up the tree, as E has no other successor and the goal node has not yet been found. After backtracking, it will
traverse node C and then G, where it terminates, as it has found the goal node.
Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm.
It is given by O(b^m), where m is the maximum depth of any node; this can be much larger than d (the depth of
the shallowest solution).
Space Complexity: The DFS algorithm needs to store only a single path from the root node, hence the space
complexity of DFS is equivalent to the size of the fringe set, which is O(bm).
Optimal: The DFS algorithm is non-optimal, as it may generate a large number of steps or a high cost to reach
the goal node.
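The stack-based DFS described above can be sketched as follows. The example graph mirrors the S-to-G traversal order in the text (S, A, B, D, E, backtrack, then C, G), but the graph itself is an assumption.

```python
# A minimal DFS sketch using an explicit stack (LIFO), as described above.
# The graph and node names are illustrative assumptions.
def dfs(graph, start, goal):
    frontier = [[start]]                 # stack of paths
    explored = set()
    while frontier:
        path = frontier.pop()            # deepest path is expanded first
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        # push successors in reverse so the leftmost child is expanded first
        for succ in reversed(graph.get(node, [])):
            if succ not in explored:
                frontier.append(path + [succ])
    return None

graph = {"S": ["A", "C"], "A": ["B"], "B": ["D", "E"],
         "C": ["G"], "D": [], "E": [], "G": []}
print(dfs(graph, "S", "G"))   # ['S', 'C', 'G']
```

The stack only ever holds paths along the current branch, which is why the space complexity stays at O(bm) rather than O(b^d).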
3. Depth-limited Search:
A depth-limited search algorithm is similar to depth-first search with a predetermined depth limit. Depth-limited
search can solve the drawback of the infinite path in depth-first search. In this algorithm, a node at the depth
limit is treated as if it has no further successors. Depth-limited search can terminate with two conditions of failure:
o Standard failure value: it indicates that the problem does not have any solution.
o Cutoff failure value: it indicates that there is no solution for the problem within the given depth limit.
Advantages:
o Depth-limited search is memory efficient.
Disadvantages:
o It has the disadvantage of incompleteness.
o It may not be optimal if the problem has more than one solution.
Example:
Completeness: The DLS algorithm is complete if the solution is within the depth limit (ℓ ≥ d).
Optimal: Depth-limited search can be viewed as a special case of DFS, and it is also not optimal, even if ℓ > d.
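The two failure values above can be made concrete in a recursive sketch. The sentinel string "cutoff" and the example graph are assumptions for illustration.

```python
# A sketch of depth-limited search with the two failure values described above:
# "cutoff" means the depth limit was hit; None is the standard failure value.
def dls(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"                  # cutoff failure value
    cutoff_occurred = False
    for succ in graph.get(node, []):
        outcome = dls(graph, succ, goal, limit - 1)
        if outcome == "cutoff":
            cutoff_occurred = True
        elif outcome is not None:
            return [node] + outcome      # solution found below this node
    # standard failure value when no branch was cut off
    return "cutoff" if cutoff_occurred else None

graph = {"S": ["A", "B"], "A": ["C"], "B": [], "C": ["G"], "G": []}
print(dls(graph, "S", "G", 3))   # ['S', 'A', 'C', 'G']
print(dls(graph, "S", "G", 2))   # cutoff
```

Distinguishing "cutoff" from None matters: "cutoff" tells the caller that raising the limit might still find a solution, while None rules one out entirely.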
4. Uniform-cost Search:
Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph. This algorithm comes
into play when a different cost is available for each edge. The primary goal of uniform-cost search is to find a
path to the goal node which has the lowest cumulative cost. Uniform-cost search expands nodes according to
their path costs from the root node. It can be used to solve any graph/tree where an optimal-cost solution is in
demand. The uniform-cost search algorithm is implemented using a priority queue, which gives maximum
priority to the lowest cumulative cost. Uniform-cost search is equivalent to the BFS algorithm if the path cost of
all edges is the same.
Advantages:
o Uniform cost search is optimal because at every state the path with the least cost is chosen.
Disadvantages:
o It does not care about the number of steps involve in searching and only concerned about path cost. Due
to which this algorithm may be stuck in an infinite loop.
Example:
Completeness:
Uniform-cost search is complete: if there is a solution, UCS will find it.
Time Complexity:
Let C* be the cost of the optimal solution and ε the minimum cost of each step toward the goal node. Then the
number of steps is C*/ε + 1 (we add +1 because we start from state 0 and end at C*/ε), so the worst-case time
complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).
Space Complexity:
The same logic applies to space, so the worst-case space complexity of uniform-cost search is also
O(b^(1 + ⌊C*/ε⌋)).
Optimal: Uniform-cost search is always optimal as it only selects a path with the lowest path cost.
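The priority-queue mechanics above can be sketched with Python's heapq module. The weighted example graph is an assumption for illustration.

```python
import heapq

# A minimal uniform-cost search sketch using a priority queue keyed
# on cumulative path cost; the weighted graph is an assumption.
def ucs(graph, start, goal):
    frontier = [(0, start, [start])]     # (path_cost, node, path); lowest cost first
    best = {start: 0}                    # cheapest known cost to each node
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path            # the lowest-cost path to the goal
        for succ, step in graph.get(node, []):
            new_cost = cost + step
            if succ not in best or new_cost < best[succ]:
                best[succ] = new_cost
                heapq.heappush(frontier, (new_cost, succ, path + [succ]))
    return None

# successors are (node, edge_cost) pairs
graph = {"S": [("A", 1), ("B", 5)], "A": [("B", 2)], "B": [("G", 1)], "G": []}
print(ucs(graph, "S", "G"))   # (4, ['S', 'A', 'B', 'G'])
```

Note that the direct edge S→B of cost 5 is rejected once the cheaper route S→A→B of cost 3 is discovered, which is exactly why UCS is optimal.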
5. Iterative Deepening Depth-first Search:
The iterative deepening algorithm is a combination of the DFS and BFS algorithms. This search algorithm finds
the best depth limit by gradually increasing the limit until a goal is found.
This algorithm performs depth-first search up to a certain "depth limit", and it keeps increasing the depth limit
after each iteration until the goal node is found.
This Search algorithm combines the benefits of Breadth-first search's fast search and depth-first search's memory
efficiency.
The iterative deepening search algorithm is a useful uninformed search when the search space is large and the
depth of the goal node is unknown.
Advantages:
o It combines the benefits of the BFS and DFS search algorithms in terms of fast search and memory efficiency.
Disadvantages:
o The main drawback of IDDFS is that it repeats all the work of the previous phase.
Example:
The following tree structure shows the iterative deepening depth-first search. The IDDFS algorithm performs
successive iterations until it finds the goal node. The iterations performed by the algorithm are as follows:
Completeness:
This algorithm is complete if the branching factor is finite.
Time Complexity:
Let b be the branching factor and d the depth of the goal node; then the worst-case time complexity is O(b^d).
Space Complexity:
The space complexity of IDDFS is O(bd).
Optimal:
The IDDFS algorithm is optimal if the path cost is a non-decreasing function of the depth of the node.
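Iterative deepening, as described above, is simply depth-limited DFS run in a loop with an increasing limit. The sketch below, including the max_depth cap and the example graph, is an assumption for illustration.

```python
# A sketch of iterative deepening: repeated depth-limited DFS with an
# increasing limit, as described above. The graph is an assumption.
def depth_limited(graph, node, goal, limit, path):
    if node == goal:
        return path
    if limit == 0:
        return None                      # this iteration's depth limit was reached
    for succ in graph.get(node, []):
        found = depth_limited(graph, succ, goal, limit - 1, path + [succ])
        if found:
            return found
    return None

def iddfs(graph, start, goal, max_depth=20):
    # limit 0, 1, 2, ... repeating shallower work each phase, until the goal is found
    for limit in range(max_depth + 1):
        found = depth_limited(graph, start, goal, limit, [start])
        if found:
            return found
    return None

graph = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": [], "G": []}
print(iddfs(graph, "S", "G"))   # ['S', 'B', 'G']
```

The repeated work in shallow levels is the drawback noted above, but since most nodes of a tree sit in the deepest level, the overhead only changes the constant factor, not the O(b^d) bound.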
6. Bidirectional Search:
Bidirectional search runs two simultaneous searches: one forward from the initial state and one backward from
the goal node, stopping when the two searches meet. Bidirectional search can use search techniques such as
BFS, DFS, DLS, etc.
Advantages:
o Bidirectional search is fast.
o Bidirectional search requires less memory.
Disadvantages:
o Implementation of the bidirectional search tree is difficult.
o In bidirectional search, the goal state must be known in advance.
Example:
In the below search tree, the bidirectional search algorithm is applied. This algorithm divides one graph/tree into
two sub-graphs. It starts traversing from node 1 in the forward direction and from goal node 16 in the backward
direction.
Time Complexity: Bidirectional search using BFS needs O(b^(d/2)) time.
Space Complexity: The space complexity of bidirectional search is also O(b^(d/2)).
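The two-frontier idea above can be sketched with BFS running from both ends. The undirected toy graph below (smaller than the 16-node figure, with edges listed in both directions) is an assumption for illustration.

```python
from collections import deque

# A sketch of bidirectional search using BFS from both ends, as described above.
# The undirected example graph (edges listed in both directions) is an assumption.
def bidirectional(graph, start, goal):
    if start == goal:
        return [start]
    fwd = {start: [start]}              # node -> path from the start node
    bwd = {goal: [goal]}                # node -> path from the goal node
    fq, bq = deque([start]), deque([goal])
    while fq and bq:
        node = fq.popleft()             # expand the forward frontier
        for succ in graph.get(node, []):
            if succ in bwd:             # the two frontiers meet: join the paths
                return fwd[node] + bwd[succ][::-1]
            if succ not in fwd:
                fwd[succ] = fwd[node] + [succ]
                fq.append(succ)
        node = bq.popleft()             # expand the backward frontier
        for succ in graph.get(node, []):
            if succ in fwd:
                return fwd[succ] + bwd[node][::-1]
            if succ not in bwd:
                bwd[succ] = bwd[node] + [succ]
                bq.append(succ)
    return None

graph = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2, 6], 6: [4]}
print(bidirectional(graph, 1, 6))   # [1, 2, 4, 6]
```

Each frontier only grows to depth d/2 before meeting the other, which is where the O(b^(d/2)) bounds above come from.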