Unit I
Here’s a brief timeline of the past six decades of how AI evolved from its
inception.
1956 - John McCarthy coined the term ‘artificial intelligence’ and organized the first
AI conference.
1969 - Shakey, the first general-purpose mobile robot, was built. It was able to act
with a purpose rather than just follow a list of instructions.
1997 - The supercomputer ‘Deep Blue’ was designed, and it defeated the world
champion chess player in a match. Building this large computer was a massive
milestone for IBM.
1. Purely Reactive
These machines do not have any memory or data to work with, specializing
in just one field of work. For example, in a chess game, the machine
observes the moves and makes the best possible decision to win.
2. Limited Memory
3. Theory of Mind
4. Self-Aware
There are various search algorithms in AI. AI agents find the best solution to a
problem by searching through all the possible alternatives or solutions. This
searching is done using search algorithms.
Search algorithms work in two main phases: defining the problem and
searching in the search space.
Initial state: This is the start state in which the search starts.
State space: These are all the possible states that can be attained
from the initial state through a series of actions.
Actions: These are the steps, activities, or operations undertaken by
AI agents in a particular state.
Goal state: This is the endpoint or the desired state.
Goal test: This is a test conducted to establish whether a particular
state is a goal state.
Path cost: This is the cost associated with a given path taken by the
agents.
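These components can be bundled into a single problem definition. The sketch below is only an illustration, assuming a small route-finding setting; the class and method names are hypothetical and not taken from any particular library.

# A minimal sketch of a search-problem definition, assuming a simple
# route-finding setting; the names here are illustrative only.

class RouteProblem:
    def __init__(self, initial_state, goal_state, graph):
        self.initial_state = initial_state   # start state of the search
        self.goal_state = goal_state         # desired end state
        self.graph = graph                   # state space as an adjacency dict

    def actions(self, state):
        """Steps the agent can take from the given state."""
        return list(self.graph.get(state, {}))

    def result(self, state, action):
        """The state reached by applying an action (here, moving to a neighbour)."""
        return action

    def goal_test(self, state):
        """Check whether a particular state is the goal state."""
        return state == self.goal_state

    def path_cost(self, cost_so_far, state, action):
        """Cost accumulated along the path taken by the agent."""
        return cost_so_far + self.graph[state][action]

# Example usage: a tiny state space with edge costs.
graph = {"S": {"A": 1, "B": 4}, "A": {"G": 5}, "B": {"G": 2}, "G": {}}
problem = RouteProblem("S", "G", graph)
print(problem.actions("S"), problem.goal_test("G"))   # ['A', 'B'] True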
Advantage:
o DFS requires very little memory, as it only needs to store a stack of the
nodes on the path from the root node to the current node.
o It can reach the goal node in less time than the BFS algorithm (if it
traverses along the right path).
Disadvantage:
o There is the possibility that many states keep recurring, and there is
no guarantee of finding a solution.
o The DFS algorithm searches deep down into the tree, and it may sometimes
descend into an infinite loop.
Example:
In the search tree below, we show the flow of depth-first search, which follows this
order:
It will start searching from root node S and traverse A, then B, then D and E. After
traversing E, it will backtrack up the tree, since E has no other successors and the
goal node has not yet been found. After backtracking, it will traverse node C and
then G, where it terminates because the goal node has been found.
Completeness: The DFS search algorithm is complete within a finite state space,
as it will expand every node within a limited search tree.
Time Complexity: In the worst case, DFS may generate on the order of O(b^m)
nodes, where m = the maximum depth of any node, which can be much larger
than d (the shallowest solution depth).
Space Complexity: The DFS algorithm needs to store only a single path from the
root node, so the space complexity of DFS is equivalent to the size of the
fringe set, which is O(bm).
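As a rough illustration of the traversal just described, the following sketch runs a recursive depth-first search on a tree consistent with the example above; the tree layout (S -> A, C; A -> B, E; B -> D; C -> G) is an assumption inferred from the stated visiting order.

# A minimal sketch of depth-first search on a tree consistent with the
# example above, returning the path to the goal node.

def dfs(tree, node, goal, path=None):
    path = (path or []) + [node]
    if node == goal:                      # goal test
        return path
    for child in tree.get(node, []):      # expand the deepest node first
        result = dfs(tree, child, goal, path)
        if result is not None:
            return result
    return None                           # backtrack: no successor leads to the goal

tree = {"S": ["A", "C"], "A": ["B", "E"], "B": ["D"], "C": ["G"]}
print(dfs(tree, "S", "G"))   # ['S', 'C', 'G']  (visits A, B, D, E first, then backtracks)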
Once the algorithm visits and marks the starting node, it moves towards the
nearest unvisited nodes and analyses them, marking each one as it is visited.
These iterations continue until all the nodes of the graph have been successfully
visited and marked.
1. In the various levels of the data, you can mark any node as the starting
or initial node to begin traversing. BFS visits that node, marks it as
visited, and places it in the queue.
2. BFS then visits the nearest unvisited nodes, marks them, and adds them
to the queue. The queue works on the FIFO model.
3. In a similar manner, the remaining nearest unvisited nodes of the graph
are analyzed, marked, and added to the queue. These items are removed
from the queue in the order they were received and printed as the result.
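The three steps above can be sketched in a few lines of Python; the graph used here is only an illustrative placeholder.

from collections import deque

# A minimal sketch of breadth-first search: visit the start node, then expand
# the nearest unvisited nodes level by level using a FIFO queue.

def bfs(graph, start):
    visited = {start}                 # mark the starting node as visited
    queue = deque([start])            # FIFO queue of nodes to expand
    order = []
    while queue:
        node = queue.popleft()        # remove nodes in the order received
        order.append(node)            # record the node as part of the result
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A"))   # ['A', 'B', 'C', 'D']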
Why do we need BFS Algorithm?
There are numerous reasons to use the BFS algorithm for searching a dataset.
Some of the most important aspects that make this algorithm a first choice are:
BFS is useful for analyzing the nodes in a graph and constructing the
shortest path for traversing through them.
BFS can traverse through a graph in the smallest number of iterations.
The architecture of the BFS algorithm is simple and robust.
The result of the BFS algorithm holds a high level of accuracy in
comparison to other algorithms.
BFS iterations are seamless, and there is no possibility of this algorithm
getting caught up in an infinite loop problem.
In the generate-and-test algorithm all the solutions are generated systematically
and the evaluation is carried out by the heuristic function: if some paths are most
unlikely to lead us to a result, they are not considered. The heuristic does this by
ranking all the alternatives, and it is often effective in doing so. Systematic
generate-and-test may prove ineffective while solving complex problems, but it
can be improved in such cases by combining generate-and-test search with other
techniques so as to reduce the search space. For example, the Artificial
Intelligence program DENDRAL uses two techniques: constraint satisfaction
techniques followed by a generate-and-test procedure that works on the reduced
search space, i.e. it yields an effective result by working on the smaller number
of lists generated in the very first step.
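As a simplified sketch (not DENDRAL itself), the following code shows how a heuristic can rank candidates and skip those unlikely to lead to a result; the generator, heuristic, goal test, and threshold here are all hypothetical.

# A simplified sketch of generate-and-test with a heuristic filter.
# The generator, heuristic, and goal test are hypothetical placeholders.

def generate_and_test(generate, heuristic, is_goal, threshold):
    # Rank candidates so the most promising ones are tested first,
    # and drop candidates the heuristic judges unlikely to succeed.
    candidates = sorted(generate(), key=heuristic)
    for candidate in candidates:
        if heuristic(candidate) > threshold:
            continue                      # unlikely to lead to a result: not considered
        if is_goal(candidate):            # test phase
            return candidate
    return None

# Example usage: find a number whose square is 49, preferring small candidates.
solution = generate_and_test(
    generate=lambda: range(100),
    heuristic=lambda x: abs(x * x - 49),
    is_goal=lambda x: x * x == 49,
    threshold=1000,
)
print(solution)   # 7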
Limitations of Hill Climbing
1. Hill Climbing can get stuck in local optima, meaning that it may not find
the global optimum of the problem.
2. The algorithm is sensitive to the choice of initial solution, and a poor
initial solution may result in a poor final solution.
3. Hill Climbing does not explore the search space very thoroughly, which
can limit its ability to find better solutions.
4. It may be less effective than other optimization algorithms, such as
genetic algorithms or simulated annealing, for certain types of problems.
Hill Climbing is a heuristic search used for mathematical optimization
problems in the field of Artificial Intelligence.
Given a large set of inputs and a good heuristic function, it tries to find a
sufficiently good solution to the problem. This solution may not be the
global optimum.
In the above definition, mathematical optimization problems imply that hill
climbing solves problems where we need to maximize or minimize a given real
function by choosing values from the given inputs. An example is the travelling
salesman problem, where we need to minimize the distance travelled by the
salesman.
‘Heuristic search’ means that this search algorithm may not find the
optimal solution to the problem. However, it will give a good solution in
a reasonable time.
A heuristic function is a function that will rank all the possible
alternatives at any branching step in the search algorithm based on the
available information. It helps the algorithm to select the best route out
of possible routes.
Features of Hill Climbing
1. Variant of the generate-and-test algorithm:
It is a variant of the generate-and-test algorithm. The generate-and-test
algorithm is as follows:
Generate possible solutions.
Test to see if this is the expected solution.
If a solution has been found, quit; otherwise go to step 1.
Hence we call hill climbing a variant of the generate-and-test algorithm, as it
takes feedback from the test procedure. This feedback is then utilized by the
generator in deciding the next move in the search space.
2. Uses the Greedy approach:
At any point in the state space, the search moves only in the direction that
optimizes the cost function, with the hope of finding the optimal solution
at the end.
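Putting the generate-and-test loop and the greedy move together, a minimal hill-climbing sketch might look as follows; the objective function and neighbourhood here are hypothetical examples, and the loop simply stops at the first local optimum it reaches.

# A minimal sketch of hill climbing: generate neighbours, move to the best one
# if it improves the objective, and stop at a local optimum. The objective
# function and neighbourhood used here are hypothetical examples.

def hill_climb(objective, neighbours, start):
    current = start
    while True:
        best_next = max(neighbours(current), key=objective, default=None)
        if best_next is None or objective(best_next) <= objective(current):
            return current                # no better neighbour: local optimum
        current = best_next               # greedy step towards a higher value

# Example: maximize f(x) = -(x - 3)^2 over integer steps of 1.
objective = lambda x: -(x - 3) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(hill_climb(objective, neighbours, start=10))   # 3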
1. Introduction
The best-first search is a class of search algorithms aiming to find
the shortest path from a given starting node to a goal node in a
graph using heuristics. The AO* algorithm is an example of the best-
first search class. In this tutorial, we’ll discuss how it works.
This class should not be mistaken for breadth-first search, which is a single
algorithm rather than a class of algorithms. Moreover, best-first search
algorithms are informed, meaning they have prior knowledge about distances in
the form of heuristics, whereas the breadth-first search algorithm is
uninformed.
Introduction:
The AO* algorithm is a best-first search algorithm. It uses the concept of
AND-OR graphs to decompose a given complex problem into a smaller set of
sub-problems, which are then solved. AND-OR graphs are specialized graphs used
for problems that can be broken down into sub-problems, where the AND side of
the graph represents a set of tasks that must all be completed to achieve the
main goal, whereas the OR side of the graph represents the different alternative
ways of performing a task to achieve the same main goal.
Working of the AO* algorithm:
The AO* algorithm works on the formula given below:
f(n) = g(n) + h(n)
where,
g(n): The actual cost of traversal from initial state to the current state.
h(n): The estimated cost of traversal from the current state to the goal state.
f(n): The estimated total cost of traversal from the initial state to the goal state
through the current state.
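The following is not a full AO* implementation, but a small sketch of how costs propagate through an AND-OR graph: an OR choice takes the cheapest alternative, while an AND arc must pay for all of its sub-problems. The graph, edge costs, and heuristic values are hypothetical.

# A simplified sketch of cost propagation in an AND-OR graph (not full AO*).
# Each entry maps a node to a list of options; an option is a list of children
# that must ALL be solved (an AND arc). A single-child option is an OR choice.
# Edge cost is taken as 1 per child, and h gives leaf heuristic estimates.

def and_or_cost(graph, h, node):
    if node not in graph:                 # leaf: use the heuristic estimate
        return h[node]
    option_costs = []
    for option in graph[node]:            # each option is one way to solve `node`
        # AND arc: sum of (edge cost + cost of each sub-problem)
        option_costs.append(sum(1 + and_or_cost(graph, h, child) for child in option))
    return min(option_costs)              # OR choice: pick the cheapest option

graph = {"A": [["B"], ["C", "D"]]}        # solve A via B, or via (C AND D)
h = {"B": 5, "C": 2, "D": 3}
print(and_or_cost(graph, h, "A"))         # min(1+5, (1+2)+(1+3)) = 6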
The greedy best-first search algorithm always selects the path which appears
best at that moment. It is a combination of the depth-first and breadth-first
search algorithms, guided by a heuristic function. Best-first search allows us to
take advantage of both algorithms: with its help, at each step, we can choose the
most promising node. In the greedy best-first search algorithm, we expand the
node which is closest to the goal node, where the closeness is estimated by the
heuristic function, i.e.
f(n) = h(n)
Advantages:
o Best first search can switch between BFS and DFS by gaining the
advantages of both the algorithms.
o This algorithm is more efficient than BFS and DFS algorithms.
Disadvantages:
o It can behave like an unguided depth-first search in the worst case.
o It can get stuck in a loop, as DFS can.
o This algorithm is not optimal.
Example:
Consider the search problem below, which we will traverse using greedy
best-first search. At each iteration, each node is expanded using the evaluation
function f(n) = h(n), whose values are given in the table below.
In this search example, we use two lists, OPEN and CLOSED. The following are
the iterations for traversing the above example.
Time Complexity: The worst-case time complexity of greedy best-first search
is O(b^m).
Space Complexity: The worst-case space complexity of greedy best-first search
is O(b^m), where m is the maximum depth of the search space.
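A minimal sketch of greedy best-first search using an OPEN priority queue ordered by h(n) and a CLOSED set is shown below; the graph and heuristic values are illustrative placeholders.

import heapq

# A minimal sketch of greedy best-first search: the OPEN list is a priority
# queue ordered by the heuristic h(n) alone, and CLOSED records expanded nodes.

def greedy_best_first(graph, h, start, goal):
    open_list = [(h[start], start, [start])]   # (h(n), node, path so far)
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for neighbour in graph.get(node, []):
            if neighbour not in closed:
                heapq.heappush(open_list, (h[neighbour], neighbour, path + [neighbour]))
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"]}
h = {"S": 7, "A": 3, "B": 5, "G": 0}
print(greedy_best_first(graph, h, "S", "G"))   # ['S', 'A', 'G']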
At each point in the search space, only the node which has the lowest value of
f(n) is expanded, and the algorithm terminates when the goal node is found.
Algorithm of A* search:
Step 1: Place the starting node in the OPEN list.
Step 2: Check if the OPEN list is empty or not; if the list is empty, then return
failure and stop.
Step 3: Select the node from the OPEN list which has the smallest value of the
evaluation function (g+h). If node n is the goal node, then return success and
stop; otherwise, go to Step 4.
Step 4: Expand node n, generate all of its successors, and put n into the
CLOSED list. For each successor n', check whether n' is already in the OPEN or
CLOSED list; if not, compute the evaluation function for n' and place it in the
OPEN list.
Step 5: Otherwise, if node n' is already in OPEN or CLOSED, attach it to the
back pointer which reflects the lowest g(n') value.
Step 6: Return to Step 2.
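The steps above can be sketched as follows. The OPEN list is a priority queue keyed on f(n) = g(n) + h(n); for simplicity, the back-pointer update of Step 5 is handled by recording the best g value found so far for each node. The graph edge costs and heuristic values are illustrative only.

import heapq

# A minimal sketch of A* search following the steps above: OPEN is a priority
# queue ordered by f(n) = g(n) + h(n); best_g stores the lowest g value found
# so far for each node, standing in for the back-pointer update.

def a_star(graph, h, start, goal):
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while open_list:                               # Step 2: fail if OPEN empties
        f, g, node, path = heapq.heappop(open_list)   # Step 3: smallest f(n)
        if node == goal:
            return path, g
        for neighbour, cost in graph.get(node, {}).items():    # Step 4: expand n
            new_g = g + cost
            if new_g < best_g.get(neighbour, float("inf")):    # Step 5: keep lowest g
                best_g[neighbour] = new_g
                heapq.heappush(open_list,
                               (new_g + h[neighbour], new_g, neighbour,
                                path + [neighbour]))
    return None, float("inf")

graph = {"S": {"A": 1, "B": 4}, "A": {"B": 2, "G": 12}, "B": {"G": 3}}
h = {"S": 7, "A": 6, "B": 2, "G": 0}
print(a_star(graph, h, "S", "G"))   # (['S', 'A', 'B', 'G'], 6)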
Advantages:
o The A* search algorithm is complete, and with an admissible heuristic it is
also optimal.
o It can solve very complex problems.
Disadvantages:
o Its performance depends heavily on the accuracy of the heuristic; a poor
heuristic can make the search slow.
o The main drawback of A* is its memory requirement, as it keeps all generated
nodes in memory.
Example:
In this example, we will traverse the given graph using the A* algorithm. The
heuristic value of each state is given in the table below, so we will calculate
f(n) for each state using the formula f(n) = g(n) + h(n), where g(n) is the cost
to reach a node from the start state.
Here we will use the OPEN and CLOSED lists.
Solution:
Points to remember:
o A* algorithm returns the path which occurred first, and it does not
search for all remaining paths.
o The efficiency of A* algorithm depends on the quality of heuristic.
o A* algorithm expands all nodes which satisfy the condition f(n) < C*, where
C* is the cost of the optimal solution.
If the heuristic function is admissible, then A* tree search will always find the
least cost path.