UNIT 1 AI Notes
UNIT – I :
Problem solving by search -I: Introduction to AI , Intelligent Agents.
Problem solving by search -II: Problem-Solving Agents.
Introduction:
Artificial Intelligence is concerned with the design of intelligence in an artificial device
or machine, so that it can reason logically and solve a given problem accurately,
efficiently, and within a reasonable time.
The term was coined by John McCarthy in 1956. It is the ability to acquire,
understand and apply the knowledge to achieve goals in the world.
AI is the study of the mental faculties through the use of computational models and
AI is the study of intellectual/mental processes as computational processes.
An AI program demonstrates a level of intelligence that equals or exceeds the
intelligence required of a human in performing some task.
AI is unique, sharing borders with Mathematics, Computer Science, Philosophy,
Psychology, Biology, Cognitive Science and many others. Although there is no clear
definition of AI or even Intelligence, it can be described as an attempt to build machines
that like humans can think and act, able to learn and use knowledge to solve problems
on their own.
Manufacturing:
Some automotive companies use AI to provide virtual assistants to their users for better
performance; for example, Tesla has introduced the Tesla Bot humanoid robot. Various
companies are currently working on developing self-driving cars, which can make your journey
safer and more secure.
Building of AI Systems:
1) Perception:
Intelligent biological systems are physically embodied in the world and experience the world
through their sensors (senses).
For an autonomous vehicle, input might be images from a camera and range information from
a rangefinder.
2) Reasoning :
Inference, decision-making, classification from what is sensed and what the internal "model"
is of the world. Includes areas of knowledge representation, problem solving, decision theory,
planning, game theory, machine learning, uncertainty reasoning, etc.
3) Action:
Biological systems interact within their environment by actuation, speech, etc. All behavior is
centered around actions in the world. Includes areas of robot actuation, natural language
generation, and speech synthesis.
Intelligent Systems:
In order to design intelligent systems, it is important to group them into the following four categories:
1. Systems that think like humans
2. Systems that think rationally
3. Systems that behave like humans
4. Systems that behave rationally
Intelligent Agents:
1. Human-Agent: A Human Agent has eyes, ears, and other organs which work for
sensors and hand, legs, vocal tract work for actuators.
2. Robotic Agent: A Robotic Agent can have cameras and infrared range finders for
sensors and various motors for actuators.
3. Software Agent: A Software Agent can receive keystrokes and file contents as sensory
input, act on those inputs, and display output on the screen.
Hence the world around us is full of agents, such as thermostats, cellphones, and
cameras; even we ourselves are agents.
Sensor: Sensor is a device which detects the change in the environment and sends the
information to other electronic devices. An agent observes its environment through
sensors.
Actuators: Actuators are the component of machines that converts energy into motion.
The actuators are only responsible for moving and controlling a system. An actuator can
be an electric motor, gears, rails, etc.
Effectors: Effectors are the devices which affect the environment. Effectors can be legs,
wheels, arms, fingers, wings, fins, and display screen.
Structure of an AI Agent
The task of AI is to design an agent program which implements the agent function.
The structure of an intelligent agent is a combination of architecture and agent program. It
can be viewed as:
Agent = Architecture + Agent program
1. Simple reflex agents:
• The Simple reflex agents are the simplest agents. These agents take decisions
on the basis of the current percept and ignore the rest of the percept history.
• These agents only succeed in a fully observable environment.
• The Simple reflex agent works on the Condition-action rule, which maps the
current state to an action. Such as a Room Cleaner agent, which works only if there
is dirt in the room.
• Problems with the simple reflex agent design approach:
They have very limited intelligence and no knowledge of the non-perceptual
parts of the current state.
The set of rules is usually too big to generate and store, and the agent does
not adapt to changes in the environment.
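The condition-action rule can be illustrated with the classic two-location vacuum world; the percept format and the state/action names ('A', 'Dirty', 'Suck', etc.) are illustrative assumptions:

```python
def simple_reflex_vacuum_agent(percept):
    """Condition-action rules for a two-location vacuum world.
    percept is a (location, status) pair; returns an action."""
    location, status = percept
    if status == 'Dirty':        # rule 1: if dirty, clean
        return 'Suck'
    if location == 'A':          # rule 2: if clean at A, move right
        return 'Right'
    return 'Left'                # rule 3: if clean at B, move left

print(simple_reflex_vacuum_agent(('A', 'Dirty')))  # Suck
print(simple_reflex_vacuum_agent(('A', 'Clean')))  # Right
```

Note that the agent consults only the current percept; nothing about previous percepts is stored.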
2. Model-based reflex agents:
• The Model-based agent can work in a partially observable environment and track the
situation.
• A model-based agent has 2 important factors:
1) Model: It is knowledge about "how things happen in the world," so it is called a
Model-based agent.
2) Internal State: It is a representation of the current state based on percept
history.
• These agents have the model, "which is knowledge of the world" and based on the
model they perform actions.
• Updating the agent state requires information about:
• How the world evolves
• How the agent's action affects the world.
3. Goal-based agents:
• The knowledge of the current state of the environment is not always sufficient for an
agent to decide what to do; the agent also needs to know its goal, which describes
desirable situations.
4. Utility-based agents:
These agents are similar to the goal-based agent but provide an extra component of utility
measurement which makes them different by providing a measure of success at a given
state.
• Utility-based agents act based not only on goals but also on the best way to achieve the goal.
• The Utility-based agent is useful when there are multiple possible alternatives, and an
agent has to choose in order to perform the best action.
• The utility function maps each state to a real number to check how efficiently each
action achieves the goals.
5. Learning Agents
• A learning agent in AI is an agent that can learn from its past experiences; that is,
it has learning capabilities.
• It starts acting with basic knowledge and is then able to act and adapt automatically
through learning.
• A learning agent has mainly four conceptual components, which are:
a. Learning element: It is responsible for making improvements by learning
from the environment.
b. Critic: The learning element takes feedback from the critic, which describes
how well the agent is doing with respect to a fixed performance standard.
c. Performance element: It is responsible for selecting external action
d. Problem generator: This component is responsible for suggesting actions that will
lead to new and informative experiences.
Search Algorithms:
• Based on the search problems, we can classify search algorithms into
uninformed search (blind search) and informed search (heuristic search)
algorithms.
Uninformed Search:
A search that does not use any domain knowledge, such as closeness or the
location of the goal, is called uninformed search. It operates in a brute-force way, as it only
includes information about how to traverse the tree and how to identify leaf and goal nodes.
Uninformed search explores the search tree without any information about the
search space, such as the initial state, operators, or a test for the goal, so it
is also called blind search. It examines each node of the tree until it reaches the goal
node. It can be divided into six main types:
1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional Search
1. Breadth-first Search:
Breadth-first search is the most common search strategy for traversing a tree
or graph. This algorithm searches breadthwise in a tree or graph, so it is
called breadth-first search.
The BFS algorithm starts searching from the root node of the tree and expands
all successor nodes at the current level before moving to nodes of the next level.
The breadth-first search algorithm is an example of a general graph-search
algorithm.
Breadth-first search is implemented using a FIFO queue data structure.
Advantages:
BFS will provide a solution if any solution exists.
If there is more than one solution for a given problem, then BFS will provide
the minimal solution, i.e. the one requiring the least number of steps.
Disadvantages:
* It requires lots of memory since each level of the tree must be saved into
memory to expand the next level.
BFS needs lots of time if the solution is far away from the root node.
Algorithm of Breadth-first search:
1. Create a variable called NODE-LIST and set it to initial state
2. Until a goal state is found or NODE-LIST is empty do
a. Remove the first element from NODE-LIST and call it E.
If NODE-LIST was empty, Quit
b. For each way that each rule can match the state described in E
do:
i. Apply the rule to generate a new state
ii. If the new state is a Goal state, quit and return this state
iii. Otherwise, add the new state to the end of NODE-LIST
Traversal order for the example tree (figure not shown), with goal node K: S, A, B, C, D, G, H, E, F, I, K
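The NODE-LIST procedure above can be sketched in Python using a FIFO queue. The adjacency list below is an assumed reconstruction of the example tree from the traversal order (S is the root, K the goal), so treat its exact shape as an assumption:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search over an adjacency-list graph.
    Returns a solution path as a list of nodes, or None."""
    frontier = deque([[start]])          # FIFO queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()        # remove the first element (E in the notes)
        node = path[-1]
        if node == goal:
            return path
        for successor in graph.get(node, []):
            if successor not in visited:
                visited.add(successor)
                frontier.append(path + [successor])
    return None

# assumed example tree: level order S | A B | C D G H | E F I | K
graph = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['G', 'H'],
         'C': ['E', 'F'], 'D': [], 'G': ['I'], 'H': [],
         'E': [], 'F': [], 'I': ['K']}
print(bfs(graph, 'S', 'K'))  # ['S', 'B', 'G', 'I', 'K']
```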
2. Depth-First Search:
Depth-first search starts from the root node and follows each path to its greatest
depth before backtracking and moving to the next path; it is applied recursively for
traversing a tree or graph data structure.
Advantage:
* DFS requires much less memory, as it only needs to store a stack of the nodes on
the path from the root node to the current node.
* It takes less time to reach the goal node than the BFS algorithm (if it traverses
the right path).
Disadvantages:
* There is the possibility that many states keep re-occurring, and there is no
guarantee of finding the solution.
Example (figure not shown): DFS starts searching from root node S and traverses A,
then B, then D and E. After traversing E it backtracks, since E has no other successor
and the goal node has not yet been found. After backtracking it traverses node C and
then G, where it terminates because it has found the goal node.
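The worked example can be reproduced with a stack-based DFS; the adjacency list below is inferred from the description (S→A,C; A→B; B→D,E; C→G), so it is an assumption:

```python
def dfs(graph, start, goal):
    """Depth-first search using an explicit LIFO stack.
    Returns a path from start to goal, or None."""
    stack = [[start]]
    visited = set()
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # push successors in reverse so the left-most child is explored first
        for successor in reversed(graph.get(node, [])):
            if successor not in visited:
                stack.append(path + [successor])
    return None

# assumed tree from the worked example: visit order S, A, B, D, E, C, G
graph = {'S': ['A', 'C'], 'A': ['B'], 'B': ['D', 'E'],
         'C': ['G'], 'D': [], 'E': [], 'G': []}
print(dfs(graph, 'S', 'G'))  # ['S', 'C', 'G']
```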
Uniform-cost Search:
Uniform-cost search expands the node with the lowest cumulative path cost g(n); it is
implemented with a priority queue and is used for traversing weighted graphs.
Advantages:
Uniform-cost search is optimal because at every state the path with the least
cost is chosen.
Disadvantages:
It does not care about the number of steps involved in the search and is
concerned only with path cost. Because of this, the algorithm may get stuck in
an infinite loop (for example, along a path of zero-cost actions).
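No figure survives for this section, so the sketch below uses a small hypothetical weighted graph; it shows the priority-queue mechanics of always expanding the frontier node with the lowest path cost g(n):

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Uniform-cost search: always expand the frontier node with the
    lowest path cost g(n). graph[u] is a list of (v, step_cost) pairs."""
    frontier = [(0, start, [start])]     # priority queue keyed on g(n)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for successor, step in graph.get(node, []):
            if successor not in explored:
                heapq.heappush(frontier, (cost + step, successor,
                                          path + [successor]))
    return None

# hypothetical weighted graph: the cheapest route is S -> A -> B -> G (cost 6)
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 12)],
         'B': [('G', 3)], 'G': []}
print(uniform_cost_search(graph, 'S', 'G'))  # (6, ['S', 'A', 'B', 'G'])
```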
Iterative Deepening Depth-first Search:
Iterative deepening depth-first search (IDDFS) runs depth-limited DFS repeatedly,
increasing the depth limit on each iteration until the goal is found.
Advantages: It combines the benefits of BFS and DFS in terms of fast
search and memory efficiency.
Disadvantages: The main drawback of IDDFS is that it repeats all the work of
the previous phase.
Properties :
Completeness: This algorithm is complete if the branching factor is finite.
Time Complexity: If b is the branching factor and d is the depth of the shallowest
goal, then the worst-case time complexity is O(b^d).
Space Complexity: The space complexity of IDDFS is O(bd).
Optimal: The IDDFS algorithm is optimal if path cost is a non-decreasing function of
the depth of the node.
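A minimal IDDFS sketch, combining a depth-limited DFS with an increasing limit; the example graph and node names are hypothetical:

```python
def depth_limited_search(graph, node, goal, limit, path):
    """Recursive DFS that never descends below the given depth limit."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for successor in graph.get(node, []):
        result = depth_limited_search(graph, successor, goal,
                                      limit - 1, path + [successor])
        if result is not None:
            return result
    return None

def iddfs(graph, start, goal, max_depth=20):
    """Iterative deepening: repeat depth-limited search with limits 0, 1, 2, ...
    The shallow levels are re-expanded each round (the 'repeated work')."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal, limit, [start])
        if result is not None:
            return result
    return None

graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'],
         'C': [], 'D': ['G'], 'G': []}
print(iddfs(graph, 'S', 'G'))  # ['S', 'B', 'D', 'G']
```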
Bidirectional Search:
Bidirectional search runs two simultaneous searches, one forward from the initial
state and one backward from the goal, and stops when the two searches meet.
Example:
In the search tree below, the bidirectional search algorithm is applied. The algorithm
divides one graph/tree into two sub-graphs:
It starts traversing from node 1 in the forward direction toward the goal.
It starts from goal node 16 in the backward direction toward node 1.
The two searches intersect at node 9, where they meet and the search terminates.
Step 1: Node 1 is the initial node, node 16 is the goal node, and node 9 is the
intersection node.
Step 2: Search simultaneously forward from the start node and backward from the
goal node.
Path to goal: 1, 4, 8, 9, 10, 12, 16
Properties :
Completeness: Bidirectional Search is complete if we use BFS in both searches.
Time Complexity: The time complexity of bidirectional search using BFS is O(b^(d/2)).
Space Complexity: The space complexity of bidirectional search is O(b^(d/2)).
Optimal: Bidirectional search is Optimal.
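The example above (nodes 1..16 meeting at node 9) can be reproduced with two alternating breadth-first searches; the edge list below is an assumed reconstruction of the figure from the path 1-4-8-9-10-12-16 plus a few extra branches:

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Two BFS waves, forward from start and backward from goal, run in
    alternation over an undirected graph until their frontiers meet."""
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand(frontier, parents, other_parents):
        node = frontier.popleft()
        for nbr in graph.get(node, []):
            if nbr not in parents:
                parents[nbr] = node
                frontier.append(nbr)
                if nbr in other_parents:     # the two searches intersect here
                    return nbr
        return None

    while frontier_f and frontier_b:
        meet = expand(frontier_f, parents_f, parents_b) or \
               expand(frontier_b, parents_b, parents_f)
        if meet:
            # stitch the two half-paths together at the meeting node
            path, n = [], meet
            while n is not None:
                path.append(n)
                n = parents_f[n]
            path.reverse()
            n = parents_b[meet]
            while n is not None:
                path.append(n)
                n = parents_b[n]
            return path
    return None

graph = {1: [2, 3, 4], 2: [1], 3: [1], 4: [1, 8], 8: [4, 9],
         9: [8, 10], 10: [9, 12], 12: [10, 16], 14: [16], 16: [12, 14]}
print(bidirectional_search(graph, 1, 16))  # [1, 4, 8, 9, 10, 12, 16]
```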
Informed Search:
This knowledge helps agents explore less of the search space and find the goal
node more efficiently.
The informed search algorithm is more useful for large search spaces.
Informed search algorithms use the idea of a heuristic, so they are also called
heuristic search.
Heuristic function: A heuristic is a function used in informed search to find the most
promising path. It takes the current state of the agent as input and produces an
estimate of how close the agent is to the goal. The heuristic method might not always
give the best solution, but it is guaranteed to find a good solution in reasonable
time. The heuristic function estimates how close a state is to the goal.
It is represented by h(n), and it estimates the cost of an optimal path between the
pair of states. The value of the heuristic function is always positive.
Admissibility of the heuristic function is given as:
h(n) <= h*(n)
where h*(n) is the true cost to reach the goal from n; an admissible heuristic never
overestimates this cost.
Pure Heuristic Search: Pure heuristic search is the simplest form of heuristic search
algorithms. It expands nodes based on their heuristic value h(n). It maintains two lists,
OPEN and CLOSED list. In the CLOSED list, it places those nodes which have already
expanded and in the OPEN list, it places nodes which have yet not been expanded.
On each iteration, the node n with the lowest heuristic value is expanded, all its
successors are generated, and n is placed in the CLOSED list. The algorithm continues
until a goal state is found.
In informed search we will discuss the two main algorithms given below:
1. Greedy best-first search
2. A* search
1. Greedy Best-first Search:
Greedy best-first search allows us to take the advantages of both BFS and DFS. With
the help of best-first search, at each step we can choose the most promising node. In
greedy best-first search we expand the node that appears closest to the goal node,
where closeness is estimated by the heuristic function, i.e.
f(n) = h(n)
Advantages:
Best-first search can switch between BFS and DFS, gaining the
advantages of both algorithms.
This algorithm is more efficient than BFS and DFS algorithms.
Disadvantages:
It can behave as an unguided depth-first search in the worst case scenario.
It can get stuck in a loop as DFS.
This algorithm is not optimal.
Example: Consider the search problem below (figure not shown); we will traverse it
using greedy best-first search. At each iteration, a node is expanded using the
evaluation function f(n) = h(n), which is given in the table below.
In this search example, we are using two lists which are OPEN and CLOSED Lists.
Following are the iteration for traversing the above example.
Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).
Space Complexity: The worst-case space complexity of greedy best-first search is
O(b^m), where m is the maximum depth of the search space.
Complete: Greedy best-first search is also incomplete, even if the given state space is
finite.
Optimal: Greedy best first search algorithm is not optimal.
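A sketch of greedy best-first search with the OPEN/CLOSED bookkeeping described above; the graph and heuristic table are hypothetical stand-ins for the missing figure:

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search: expand the OPEN node with the smallest
    heuristic value h(n); f(n) = h(n), path cost is ignored."""
    open_list = [(h[start], start, [start])]
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for successor in graph.get(node, []):
            if successor not in closed:
                heapq.heappush(open_list, (h[successor], successor,
                                           path + [successor]))
    return None

# hypothetical graph and heuristic table (goal G has h = 0)
graph = {'S': ['A', 'B'], 'A': [], 'B': ['E', 'F'],
         'E': [], 'F': ['I', 'G'], 'I': [], 'G': []}
h = {'S': 13, 'A': 12, 'B': 4, 'E': 8, 'F': 2, 'I': 9, 'G': 0}
print(greedy_best_first(graph, h, 'S', 'G'))  # ['S', 'B', 'F', 'G']
```

Notice that the path cost plays no role: the search simply chases the smallest h value, which is why it can be misled by a bad heuristic.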
2. A* Search Algorithm:
A* search is the most commonly known form of best-first search. It uses the heuristic
function h(n) and the cost g(n) to reach node n from the start state. It combines
features of UCS and greedy best-first search, which lets it solve problems efficiently. The A*
search algorithm finds the shortest path through the search space using the heuristic
function. This search algorithm expands a smaller search tree and provides an optimal result
sooner. The A* algorithm is similar to UCS except that it uses g(n) + h(n) instead of g(n).
In the A* search algorithm we use the search heuristic as well as the cost to reach the
node. Hence we can combine both costs as follows, and this sum is called the fitness
number:
f(n) = g(n) + h(n)
Algorithm of A* search:
Step 1: Place the starting node in the OPEN list.
Step 2: If the OPEN list is empty, stop and return failure.
Step 3: Select the node n from the OPEN list with the smallest value of f(n); if n is
the goal node, stop and return success.
Step 4: Expand node n, generate all its successors, and put n into the CLOSED list.
For each successor n', compute f(n') and place it in the OPEN list if it is not
already in OPEN or CLOSED.
Step 5: Return to Step 2.
Advantages:
The A* search algorithm generally performs better than other search algorithms.
A* search algorithm is optimal and complete.
This algorithm can solve very complex problems.
Disadvantages:
It does not always produce the shortest path, as it is mostly based on
heuristics and approximation.
A* search algorithm has some complexity issues.
The main drawback of A* is memory requirement as it keeps all generated
nodes in the memory, so it is not practical for various large-scale problems.
Example:
In this example, we will traverse the given graph (figure not shown) using the A*
algorithm. The heuristic value of all states is given in the table below, so we will
calculate the f(n) of each state using the formula f(n) = g(n) + h(n), where g(n) is the
cost to reach a node from the start state.
Here we will use OPEN and CLOSED list.
Points to remember:
The A* algorithm returns the path which occurs first, and it does not search for
all remaining paths.
The efficiency of the A* algorithm depends on the quality of the heuristic.
The A* algorithm expands every node that satisfies the condition f(n) < C*, where
C* is the cost of the optimal solution.
Properties :
Complete: The A* algorithm is complete as long as the branching factor is finite and
the cost of every action is fixed.
Optimal: The A* search algorithm is optimal if it satisfies the following two conditions:
Admissibility: the first condition required for optimality is that h(n) be an
admissible heuristic for A* tree search. An admissible heuristic is optimistic
in nature (it never overestimates the true cost).
Consistency: the second condition, consistency, is required only for A* graph search.
If the heuristic function is admissible, then A* tree search will always find the least-cost
path.
AI Notes prepared by Ravindranath
Time Complexity: The time complexity of A* search algorithm depends on heuristic
function, and the number of nodes expanded is exponential to the depth of solution d. So
the time complexity is O(b^d), where b is the branching factor.
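A compact A* sketch using f(n) = g(n) + h(n); the weighted graph and the admissible heuristic table below are illustrative assumptions:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the OPEN node with the smallest f(n) = g(n) + h(n).
    graph[u] is a list of (v, step_cost) pairs; h maps node -> heuristic."""
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        if g > best_g.get(node, float('inf')):
            continue                              # stale queue entry
        for successor, step in graph.get(node, []):
            g2 = g + step
            if g2 < best_g.get(successor, float('inf')):
                best_g[successor] = g2
                heapq.heappush(open_list, (g2 + h[successor], g2, successor,
                                           path + [successor]))
    return None

# hypothetical weighted graph with an admissible heuristic (h <= true cost)
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('C', 5), ('G', 12)],
         'B': [('C', 2)], 'C': [('G', 3)], 'G': []}
h = {'S': 7, 'A': 6, 'B': 4, 'C': 2, 'G': 0}
print(a_star(graph, h, 'S', 'G'))  # (8, ['S', 'A', 'B', 'C', 'G'])
```

With h set to 0 everywhere, this code degenerates to uniform-cost search, which illustrates the "UCS plus heuristic" relationship described above.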
Hill Climbing Search:
Hill climbing is a local search algorithm that continually moves in the direction of
increasing value in order to find the peak of the state-space landscape. Important
regions of the landscape are:
Local maximum: a state which is better than its neighbour states, but there is
another state which is higher still.
Global maximum: the best possible state of the state-space landscape; it has
the highest value of the objective function.
Current state: the state in the landscape diagram where the agent is
currently present.
Flat local maximum: a flat region of the landscape where all the neighbour
states of the current state have the same value.
Shoulder: a plateau region which has an uphill edge.
Problems in hill climbing:
1. Local Maximum: at a local maximum all neighbouring states have worse values than
the current state, so the search terminates even though a better state exists
elsewhere. Solution: the backtracking technique can help; maintain a list of visited
states and, on reaching an undesirable state, backtrack and explore a different path.
2. Plateau: A plateau is a flat area of the search space in which all the neighbour
states of the current state have the same value; because of this, the algorithm cannot
find a best direction in which to move, and a hill-climbing search might get lost in the
plateau area.
Solution: Take big steps (or very small steps) while searching; alternatively, randomly
select a state far away from the current state, so that the algorithm may land in a
non-plateau region.
3. Ridges: A ridge is a special form of local maximum: an area which is higher than
its surrounding areas but which itself has a slope, and which cannot be climbed in a
single move.
Solution: Using bidirectional search, or moving in several different directions at once,
can mitigate this problem.
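The landscape terms above describe hill climbing; a minimal steepest-ascent sketch on a hypothetical one-dimensional objective shows how the search can stop at a local maximum:

```python
def hill_climbing(neighbors, value, state):
    """Steepest-ascent hill climbing: move to the best neighbour until no
    neighbour improves on the current state (a local maximum or plateau)."""
    while True:
        best = max(neighbors(state), key=value, default=state)
        if value(best) <= value(state):
            return state                 # stuck: local maximum / plateau
        state = best

# toy objective with a local maximum at x = 2 and the global maximum at x = 8
def value(x):
    return -(x - 2) ** 2 + 5 if x < 5 else -(x - 8) ** 2 + 9

def neighbors(x):
    return [x - 1, x + 1]

print(hill_climbing(neighbors, value, 0))   # 2  (trapped on the local maximum)
print(hill_climbing(neighbors, value, 6))   # 8  (reaches the global maximum)
```

The two runs demonstrate the problem described above: the outcome depends entirely on where the climb starts.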
Nondeterministic problems:
The transition model is defined by a RESULTS function that returns a set of possible
outcome states.
A solution is not a sequence but a contingency plan (strategy),
e.g.
[Suck, if State = 5 then [Right, Suck] else []]
In nondeterministic environments, agents can apply AND-OR search to
generate contingent plans that reach the goal regardless of which outcomes occur
during execution.
AND-OR tree: OR nodes and AND nodes alternate. State nodes are OR
nodes, where some action must be chosen. At the AND nodes (shown as circles),
every outcome must be handled.
A solution for an AND-OR search problem is a subtree that
1) has a goal node at every leaf.
2) specifies one action at each of its OR nodes.
3) includes every outcome branch at each of its AND nodes.
Solution path nodes: 1, 5, 6, 8
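The three conditions above can be implemented with the mutually recursive OR-search/AND-search scheme (in the spirit of the AIMA pseudocode); the tiny nondeterministic problem at the bottom is a hypothetical example:

```python
def and_or_search(problem):
    """AND-OR graph search: returns a conditional plan that reaches a goal
    no matter which outcome each nondeterministic action produces."""
    return or_search(problem['initial'], problem, [])

def or_search(state, problem, path):
    if problem['goal'](state):
        return []                        # empty plan: already at a goal
    if state in path:
        return None                      # cycle on this branch: fail
    for action in problem['actions'](state):
        plan = and_search(problem['results'](state, action),
                          problem, [state] + path)
        if plan is not None:
            return [action, plan]        # one action chosen at an OR node
    return None

def and_search(states, problem, path):
    plans = {}
    for s in states:                     # every outcome must be handled
        plan = or_search(s, problem, path)
        if plan is None:
            return None
        plans[s] = plan
    return plans

# hypothetical problem: from A, 'move' lands in B or C; B is the goal;
# from C, 'move' always lands in B
problem = {
    'initial': 'A',
    'goal': lambda s: s == 'B',
    'actions': lambda s: ['move'] if s in ('A', 'C') else [],
    'results': lambda s, a: {'A': ['B', 'C'], 'C': ['B']}[s],
}
print(and_or_search(problem))
# ['move', {'B': [], 'C': ['move', {'B': []}]}]
```

The returned nested structure is exactly the contingency plan described above: do 'move', then branch on whichever outcome actually occurred.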
Belief state: The agent's current belief about the possible physical states it
might be in, given the sequence of actions and percepts up to that point.
Belief-state space: Contains every possible set of physical states. If the physical
problem P has N states, the sensorless problem has up to 2^N belief states.
Transition model:
For deterministic actions,
b' = RESULT(b, a) = {s' : s' = RESULT_P(s, a) and s ∈ b} (b' is never larger than b).
Path cost: If an action sequence is a solution for a belief state b, it is also a solution
for any subset of b. Hence, we can discard a path reaching the superset if the subset
has already been generated. Conversely, if the superset has already been generated
and found to be solvable, then any subset is guaranteed to be solvable.
The prediction stage is the same as for the sensorless problem: given the action a
in belief state b, the predicted belief state is
b̂ = PREDICT(b, a)
The observation prediction stage determines the set of percepts o that could be
observed in the predicted belief state:
POSSIBLE-PERCEPTS(b̂) = {o : o = PERCEPT(s) and s ∈ b̂}
In conclusion:
RESULTS(b, a) = {bo : bo = UPDATE(PREDICT(b, a), o) and o ∈ POSSIBLE-PERCEPTS(PREDICT(b, a))}
The search algorithm returns a conditional plan that tests the belief state rather
than the actual state.
Given an initial state b, an action a, and a percept o, the new belief state is
b’ = UPDATE(PREDICT(b, a), o). //recursive state estimator
State estimation: a.k.a. monitoring or filtering; maintaining one's belief state is a core
function of an intelligent system in partially observable environments.
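The PREDICT/UPDATE/RESULTS pipeline can be sketched directly with Python sets; the toy world and its percept function below are assumptions for illustration only:

```python
def predict(b, a, result):
    """PREDICT(b, a): apply action a to every physical state in belief state b."""
    return frozenset(result(s, a) for s in b)

def possible_percepts(b, percept):
    """POSSIBLE-PERCEPTS(b): every percept some state in b could produce."""
    return {percept(s) for s in b}

def update(b, o, percept):
    """UPDATE(b, o): keep only the states consistent with percept o."""
    return frozenset(s for s in b if percept(s) == o)

def results(b, a, result, percept):
    """RESULTS(b, a): one successor belief state for each possible percept."""
    bhat = predict(b, a, result)
    return {o: update(bhat, o, percept) for o in possible_percepts(bhat, percept)}

# toy world: states 1..4; 'inc' moves one step right, capped at 4;
# the agent perceives only the parity of its state
result = lambda s, a: min(s + 1, 4)
percept = lambda s: 'even' if s % 2 == 0 else 'odd'
print(results(frozenset({1, 2}), 'inc', result, percept))
```

Starting from the belief state {1, 2}, the action splits the belief by percept: an 'even' observation narrows it to {2}, an 'odd' one to {3}, which is exactly the recursive state estimator b' = UPDATE(PREDICT(b, a), o).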
Safely explorable: some goal state is reachable from every reachable state. E.g.
state spaces with reversible actions such as mazes and 8-puzzles.
No bounded competitive ratio can be guaranteed even in safely explorable
environments if there are paths of unbounded cost.
ONLINE-DFS-AGENT works only in state spaces where the actions are reversible.
RESULT: a table in which the agent stores its map; RESULT[s, a] records the state
resulting from executing action a in state s.
Whenever an action from the current state has not been explored, the agent tries that
action.
When the agent has tried all the actions in a state, the agent in an online
search backtracks physically (in a depth-first search, this means going back to the
state from which the agent most recently entered the current state).
Basic idea of an online agent: a random walk will eventually find a goal or complete its
exploration if the space is finite, but it can be very slow. A more effective approach is to
store a "current best estimate" H(s) of the cost to reach the goal from each state that
has been visited. H(s) starts out as the heuristic estimate h(s) and is updated as the
agent gains experience in the state space.
If the agent is stuck in a flat local minimum, it will follow what seems
to be the best path to the goal given the current cost estimates for its neighbours.
The estimated cost to reach the goal through a neighbour s' is the cost to get to s' plus
the estimated cost to get to a goal from there, that is, c(s, a, s') + H(s').
LRTA*: learning real-time A*. It builds a map of the environment in the RESULT table,
updates the cost estimate for the state it has just left, and then chooses the "apparently
best" move according to its current cost estimates.
Actions that have not yet been tried in a state s are always assumed to lead
immediately to the goal with the least possible cost, namely h(s); this optimism
under uncertainty encourages the agent to explore new, possibly promising paths.
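A condensed LRTA*-style sketch (not the exact AIMA agent): it learns a result table online, updates H for the state it is about to leave, and moves to the apparently best neighbour; the one-dimensional corridor environment is hypothetical:

```python
def lrta_star(start, goal, actions, move, step_cost, h, max_steps=50):
    """Simulated LRTA*-style agent: H holds learned cost-to-goal estimates,
    result is the map of the environment discovered online."""
    H, result = {}, {}
    s, path = start, [start]
    for _ in range(max_steps):
        if s == goal:
            return path
        def est(a):
            s2 = result.get((s, a))
            if s2 is None:
                return h(s)              # untried action: optimism under uncertainty
            return step_cost + H.get(s2, h(s2))
        H[s] = min(est(a) for a in actions(s))   # update the state being left
        a = min(actions(s), key=est)             # apparently best move
        result[(s, a)] = move(s, a)              # the environment reveals the outcome
        s = result[(s, a)]
        path.append(s)
    return path

# hypothetical corridor: states 0..3, goal 3, unit step cost, h(s) = 3 - s
def move(s, a):
    return s - 1 if a == 'L' else s + 1

def actions(s):
    return [a for a in ('L', 'R') if 0 <= move(s, a) <= 3]

path = lrta_star(0, 3, actions, move, 1, lambda s: 3 - s)
print(path)  # the agent wanders a little, then reaches goal state 3
```

Because untried actions look maximally cheap, the agent initially oscillates while it fills in H, then settles on the direct route: the learned estimates are what eventually steer it to the goal.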