AI Unit 1
Philosophy of AI
While exploiting the power of computer systems, human curiosity led us to wonder, "Can a machine think and behave the way humans do?"
Thus, the development of AI started with the intention of creating, in machines, the kind of intelligence that we find in humans and regard so highly.
Goals of AI
• To Create Expert Systems − Systems which exhibit intelligent behavior, learn, demonstrate, explain, and advise their users.
"It is a branch of computer science by which we can create intelligent machines which can behave like
a human, think like humans, and able to make decisions."
Artificial Intelligence exists when a machine can have human based skills such as learning, reasoning,
and solving problems
With Artificial Intelligence you do not need to preprogram a machine to do some work, despite that
you can create a machine with programmed algorithms which can work with own intelligence, and
that is the awesomeness of AI.
AI is not considered a new technology; some people even say that, as per Greek myth, there were mechanical men in early days which could work and behave like humans.
o With the help of AI, you can create software or devices which can solve real-world problems easily and with accuracy, such as health issues, marketing, traffic issues, etc.
o With the help of AI, you can create your own personal virtual assistant, such as Cortana, Google Assistant, Siri, etc.
o With the help of AI, you can build robots which can work in environments where human survival may be at risk.
o AI opens a path for other new technologies, new devices, and new opportunities.
What Contributes to AI?
Artificial intelligence is a science and technology based on disciplines such as Computer Science,
Biology, Psychology, Linguistics, Mathematics, and Engineering. A major thrust of AI is in the
development of computer functions associated with human intelligence, such as reasoning, learning,
and problem solving.
One or more of these disciplines can contribute to building an intelligent system.
What is AI Technique?
In the real world, knowledge has some unwelcome properties −
o Its volume is huge, next to unimaginable.
o It is not well-organized or well-formatted.
o It keeps changing constantly.
Applications of AI
• Gaming − AI plays a crucial role in strategic games such as chess, poker, tic-tac-toe, etc., where a machine can think of a large number of possible positions based on heuristic knowledge.
• Natural Language Processing − It makes it possible to interact with a computer that understands natural language spoken by humans.
• Expert Systems − There are some applications which integrate machine, software, and special
information to impart reasoning and advising. They provide explanation and advice to the users.
• Vision Systems − These systems understand, interpret, and comprehend visual input on the computer. For example,
o A spying aeroplane takes photographs, which are used to figure out spatial information or a map of the area.
o Doctors use a clinical expert system to diagnose the patient.
o Police use computer software that can recognize the face of a criminal from the stored portrait made by a forensic artist.
• Speech Recognition − Some intelligent systems are capable of hearing and comprehending language in terms of sentences and their meanings while a human talks to them. They can handle different accents, slang words, noise in the background, change in a human's voice due to a cold, etc.
• Handwriting Recognition − Handwriting recognition software reads text written on paper with a pen or on screen with a stylus. It can recognize the shapes of the letters and convert them into editable text.
• Intelligent Robots − Robots are able to perform the tasks given by a human. They have sensors
to detect physical data from the real world such as light, heat, temperature, movement, sound,
bump, and pressure. They have efficient processors, multiple sensors and huge memory, to
exhibit intelligence. In addition, they are capable of learning from their mistakes and they can
adapt to the new environment.
History of AI
1923 − Karel Čapek's play "Rossum's Universal Robots" (RUR) opens in London; first use of the word "robot" in English.
1945 − Isaac Asimov, a Columbia University alumnus, coined the term Robotics.
1950 − Alan Turing introduced the Turing Test for the evaluation of intelligence and published "Computing Machinery and Intelligence". Claude Shannon published a detailed analysis of chess playing as search.
1956 − John McCarthy coined the term Artificial Intelligence. Demonstration of the first running AI program at Carnegie Mellon University.
1964 − Danny Bobrow's dissertation at MIT showed that computers can understand natural language well enough to solve algebra word problems correctly.
1965 − Joseph Weizenbaum at MIT built ELIZA, an interactive program that carries on a dialogue in English.
1973 − The Assembly Robotics group at Edinburgh University built Freddy, the Famous Scottish Robot, capable of using vision to locate and assemble models.
1979 − The first computer-controlled autonomous vehicle, the Stanford Cart, was built.
1985 − Harold Cohen created and demonstrated the drawing program Aaron.
1997 − The Deep Blue chess program beat the then world chess champion, Garry Kasparov.
2000 − Interactive robot pets became commercially available. MIT displayed Kismet, a robot with a face that expresses emotions. The robot Nomad explored remote regions of Antarctica and located meteorites.
Advantages of Artificial Intelligence
Following are some main advantages of Artificial Intelligence:
o High accuracy with fewer errors: AI machines or systems are prone to fewer errors and higher accuracy, as they take decisions based on prior experience or information.
o High speed: AI systems can be very fast in decision-making; because of this, AI systems can beat a chess champion in the game of chess.
o High reliability: AI machines are highly reliable and can perform the same action multiple
times with high accuracy.
o Useful for risky areas: AI machines can be helpful in situations such as defusing a bomb or exploring the ocean floor, where employing a human can be risky.
o Digital assistant: AI can be very useful as a digital assistant to users; for example, AI technology is currently used by various e-commerce websites to show products as per customer requirements.
o Useful as a public utility: AI can be very useful for public utilities, such as a self-driving car which can make our journey safer and hassle-free, facial recognition for security purposes, natural language processing to communicate with humans in human language, etc.
Every technology has some disadvantages, and the same goes for Artificial Intelligence. Despite being such an advantageous technology, it still has some disadvantages which we need to keep in mind while creating an AI system. Following are the disadvantages of AI:
o High cost: The hardware and software requirements of AI are very costly, as it needs a lot of maintenance to meet current world requirements.
o Can't think out of the box: Even though we are making smarter machines with AI, they still cannot work beyond what they have been trained or programmed for.
o No feelings and emotions: An AI machine can be an outstanding performer, but it does not have feelings, so it cannot form any kind of emotional attachment with humans and may sometimes be harmful to users if proper care is not taken.
o Increased dependency on machines: With advancing technology, people are becoming more and more dependent on devices and hence are losing some of their mental capabilities.
o No original creativity: Humans are highly creative and can imagine new ideas, but AI machines cannot match this power of human intelligence and cannot be creative and imaginative.
Task Classification of AI
The domain of AI is classified into Formal tasks, Mundane tasks, and Expert tasks.
Task Domains of Artificial Intelligence
(Figure: the task domains are grouped into mundane tasks such as perception, natural language, common-sense reasoning, planning, and robotics; formal tasks such as mathematics and games; and expert tasks such as engineering, scientific analysis, medical diagnosis, financial analysis, and creativity.)
Humans learn mundane (ordinary) tasks from birth. They learn by perception, speaking, using language, and locomotion. They learn formal tasks and expert tasks later, in that order.
For humans, the mundane tasks are easiest to learn. The same was considered true before trying to
implement mundane tasks in machines. Earlier, all work of AI was concentrated in the mundane task
domain.
Later, it turned out that the machine requires more knowledge, complex knowledge representation,
and complicated algorithms for handling mundane tasks. This is the reason why AI work is more
prospering in the Expert Tasks domain now, as the expert task domain needs expert knowledge
without common sense, which can be easier to represent and handle.
Types of Artificial Intelligence:
Artificial Intelligence can be divided in various types, there are mainly two types of main
categorization which are based on capabilities and based on functionally of AI. Following is flow
diagram which explain the types of AI.
2. General AI:
o General AI is a type of intelligence that could perform any intellectual task with efficiency like a human.
o The idea behind general AI is to make a system that could be smarter and think like a human on its own.
o Currently, no system exists that could come under general AI and perform any task as perfectly as a human.
o Researchers worldwide are now focused on developing machines with General AI.
o Since systems with general AI are still under research, it will take a lot of effort and time to develop such systems.
3. Super AI:
o Super AI is a level of intelligence of systems at which machines could surpass human intelligence and perform any task better than a human, with cognitive properties. It is an outcome of general AI.
o Some key characteristics of super AI include the ability to think, reason, solve puzzles, make judgments, plan, learn, and communicate on its own.
o Super AI is still a hypothetical concept of Artificial Intelligence. Developing such systems in reality is still a world-changing task.
2. Limited Memory:
o Limited memory machines can store past experiences or some data for a short period of time.
o These machines can use stored data for a limited time period only.
o Self-driving cars are one of the best examples of limited memory systems. These cars can store the recent speed of nearby cars, the distance of other cars, the speed limit, and other information needed to navigate the road.
3. Theory of Mind:
o Theory of Mind AI should understand human emotions, people, and beliefs, and be able to interact socially like humans.
o This type of AI machine has not been developed yet, but researchers are making a lot of effort and progress toward developing such machines.
Agent Terminology
• Performance Measure of Agent − It is the criterion that determines how successful an agent is.
• Behavior of Agent − It is the action that an agent performs after any given sequence of percepts.
• Percept − It is an agent's perceptual input at a given instance.
• Percept Sequence − It is the history of everything that an agent has perceived till date.
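The terms above can be made concrete with a minimal Python sketch of a simple reflex agent; the class and rule table below are purely illustrative, not a standard API.

```python
# Minimal sketch of the agent terminology above (all names are illustrative):
# the agent maps a growing percept sequence to actions, and a performance
# measure would score the result externally.

class SimpleReflexAgent:
    def __init__(self, rules):
        self.rules = rules              # percept -> action mapping
        self.percept_sequence = []      # history of everything perceived

    def act(self, percept):
        """Behavior: choose an action after receiving a percept."""
        self.percept_sequence.append(percept)
        return self.rules.get(percept, "noop")

# Example: a trivial vacuum-world agent; a performance measure could count
# cleaned squares per time step.
rules = {("A", "dirty"): "suck", ("A", "clean"): "right",
         ("B", "dirty"): "suck", ("B", "clean"): "left"}
agent = SimpleReflexAgent(rules)
print(agent.act(("A", "dirty")))        # -> suck
```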
Rationality is nothing but the status of being reasonable, sensible, and having a good sense of judgment. Rationality is concerned with the expected actions and results, depending upon what the agent has perceived. Performing actions with the aim of obtaining useful information is an important part of rationality.
Turing Test
The success of the intelligent behavior of a system can be measured with the Turing Test.
Two persons and the machine to be evaluated participate in the test. One of the two persons plays the role of the tester. Each of them sits in a different room. The tester does not know who is the machine and who is the human. He interrogates them by typing questions and sending them to both intelligences, to which he receives typed responses.
This test aims at fooling the tester. If the tester fails to distinguish the machine's response from the human response, then the machine is said to be intelligent.
Properties of Environment
The environment has multifold properties −
• Discrete / Continuous − If there are a limited number of distinct, clearly defined, states of the
environment, the environment is discrete (For example, chess); otherwise it is continuous (For
example, driving).
• Observable / Partially Observable − If it is possible to determine the complete state of the
environment at each time point from the percepts it is observable; otherwise it is only partially
observable.
• Static / Dynamic − If the environment does not change while an agent is acting, then it is static;
otherwise it is dynamic.
• Single agent / Multiple agents − The environment may contain other agents which may be of
the same or different kind as that of the agent.
• Accessible / Inaccessible − If the agent’s sensory apparatus can have access to the complete
state of the environment, then the environment is accessible to that agent.
• Deterministic / Non-deterministic − If the next state of the environment is completely
determined by the current state and the actions of the agent, then the environment is
deterministic; otherwise it is non-deterministic.
• Episodic / Non-episodic − In an episodic environment, each episode consists of the agent
perceiving and then acting. The quality of its action depends just on the episode itself.
Subsequent episodes do not depend on the actions in the previous episodes. Episodic
environments are much simpler because the agent does not need to think ahead.
Problem Solving
Problem:
A problem can arise for different reasons and, if solvable, can usually be solved in a number of different ways; accordingly, a problem can also be defined in a number of different ways.
A state space is the set of legal states reachable from the initial state by using the set of rules to move from one state to another while attempting to end up in a goal state.
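As a rough illustration of this idea, a problem can be represented by its initial state, a successor (rule) function, and a goal test; the Python sketch below uses hypothetical names and a toy number-game state space.

```python
# Illustrative sketch of a problem as a state space: an initial state, a rule
# (successor) function, and a goal test. Names are hypothetical, not from a library.

class Problem:
    def __init__(self, initial, goal, successors):
        self.initial = initial
        self.goal = goal
        self.successors = successors    # function: state -> list of (action, next_state)

    def is_goal(self, state):
        return state == self.goal

# Tiny example: states are integers, the rules are "+1" and "*2", the goal is 10.
def successors(n):
    return [("+1", n + 1), ("*2", n * 2)]

problem = Problem(initial=1, goal=10, successors=successors)
print(problem.is_goal(10))   # True
```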
Searching Algorithm
Search Algorithm Terminologies:
Uninformed/Blind Search:
The uninformed search does not contain any domain knowledge such as closeness, the location of the
goal. It operates in a brute-force way as it only includes information about how to traverse the tree and
how to identify leaf and goal nodes. Uninformed search applies a way in which search tree is searched
without any information about the search space like initial state operators and test for the goal, so it is
also called blind search.It examines each node of the tree until it achieves the goal node.
o Breadth-first search
o Uniform cost search
o Depth-first search
o Iterative deepening depth-first search
o Bidirectional search
Informed Search
Informed search algorithms use domain knowledge. In an informed search, problem information is
available which can guide the search. Informed search strategies can find a solution more efficiently
than an uninformed search strategy. Informed search is also called a Heuristic search.
A heuristic is a method that is not always guaranteed to find the best solution but is guaranteed to find a good solution in reasonable time.
Informed search can solve much more complex problems that could not be solved in any other way.
1. Greedy Search
2. A* Search
3. AO* Search
1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional Search
1. Breadth-first Search:
o Breadth-first search is the most common search strategy for traversing a tree or graph. This
algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.
o The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.
o The breadth-first search algorithm is an example of a general-graph search algorithm.
o Breadth-first search is implemented using a FIFO queue data structure.
Advantages:
o If there is more than one solution for a given problem, then BFS will provide the minimal solution, which requires the least number of steps.
Disadvantages:
o It requires a lot of memory, since each level of the tree must be saved in memory in order to expand the next level.
o BFS needs a lot of time if the solution is far away from the root node.
Example:
In the tree structure below, we have shown the traversal of the tree using the BFS algorithm from root node S to goal node K. The BFS algorithm traverses in layers, so it will follow the path shown by the dotted arrow, and the traversed path will be:
1. S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
Time Complexity: The time complexity of the BFS algorithm is given by the number of nodes traversed in BFS until the shallowest goal node: T(b) = 1 + b + b^2 + ... + b^d = O(b^d), where d = the depth of the shallowest solution and b = the branching factor (the number of successors at every node).
Space Complexity: The space complexity of the BFS algorithm is given by the memory size of the frontier, which is O(b^d).
Completeness: BFS is complete, which means if the shallowest goal node is at some finite depth, then
BFS will find a solution.
Optimality: BFS is optimal if path cost is a non-decreasing function of the depth of the node.
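A minimal BFS sketch in Python, using a FIFO queue of paths over a small illustrative adjacency-list graph (not the figure referred to above):

```python
from collections import deque

# Breadth-first search sketch: expand nodes level by level using a FIFO queue.

def bfs(graph, start, goal):
    frontier = deque([[start]])        # queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None                        # no solution

graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["G"], "C": [], "D": [], "G": []}
print(bfs(graph, "S", "G"))            # ['S', 'B', 'G']
```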
2. Depth-first Search:
o Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
o It is called depth-first search because it starts from the root node and follows each path to its greatest-depth node before moving to the next path.
o DFS uses a stack data structure for its implementation.
o The process of the DFS algorithm is similar to the BFS algorithm.
Note: Backtracking is an algorithm technique for finding all possible solutions using recursion.
Advantage:
o DFS requires very little memory, as it only needs to store the stack of nodes on the path from the root node to the current node.
o It takes less time to reach the goal node than the BFS algorithm (if it traverses the right path).
Disadvantage:
o There is the possibility that many states keep re-occurring, and there is no guarantee of finding a solution.
o The DFS algorithm goes deep into the search and may sometimes enter an infinite loop.
Example:
In the search tree below, we have shown the flow of depth-first search, which follows this order: it starts searching from root node S and traverses A, then B, then D and E. After traversing E, it backtracks up the tree, as E has no other successor and the goal node has still not been found. After backtracking, it traverses node C and then G, where it terminates because it has found the goal node.
Completeness: DFS search algorithm is complete within finite state space as it will expand every node
within a limited search tree.
Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm. It is given by T(b) = 1 + b + b^2 + ... + b^m = O(b^m), where m = the maximum depth of any node, which can be much larger than d (the depth of the shallowest solution).
Space Complexity: The DFS algorithm needs to store only a single path from the root node; hence the space complexity of DFS is equivalent to the size of the fringe set, which is O(bm).
Optimal: The DFS algorithm is non-optimal, as it may generate a large number of steps or a high-cost path to reach the goal node.
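A minimal DFS sketch using an explicit LIFO stack over an illustrative graph (a visited set is added to avoid the re-occurring-state problem mentioned above):

```python
# Depth-first search sketch: follow one path as deep as possible before backtracking.

def dfs(graph, start, goal):
    stack = [[start]]                  # LIFO stack of paths
    visited = set()
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # push successors in reverse so the leftmost child is explored first
        for neighbor in reversed(graph.get(node, [])):
            stack.append(path + [neighbor])
    return None

graph = {"S": ["A", "C"], "A": ["B"], "B": ["D", "E"], "C": ["G"],
         "D": [], "E": [], "G": []}
print(dfs(graph, "S", "G"))            # ['S', 'C', 'G'] after exploring A's subtree first
```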
3. Depth-limited Search:
A depth-limited search algorithm is similar to depth-first search with a predetermined depth limit ℓ; a node at the depth limit is treated as if it has no successors. Depth-limited search can terminate with two conditions of failure:
o Standard failure value: it indicates that the problem does not have any solution.
o Cutoff failure value: it indicates that there is no solution for the problem within the given depth limit.
Advantages:
o Depth-limited search is memory efficient.
Disadvantages:
o Depth-limited search also has the disadvantage of incompleteness, and it may not be optimal if the problem has more than one solution.
Example:
Completeness: The DLS algorithm is complete if the solution lies within the depth limit.
Optimal: Depth-limited search can be viewed as a special case of DFS, and it is also not optimal, even if ℓ > d.
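A small recursive sketch of depth-limited search that distinguishes the cutoff value from standard failure (graph and names are illustrative):

```python
# Depth-limited search sketch: a recursive DFS that stops at a given depth
# limit and distinguishes "cutoff" from outright failure.

def dls(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"                # no solution within the depth limit
    cutoff_occurred = False
    for child in graph.get(node, []):
        result = dls(graph, child, goal, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    return "cutoff" if cutoff_occurred else None   # None = standard failure

graph = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": [], "G": []}
print(dls(graph, "S", "G", limit=2))   # ['S', 'B', 'G']
print(dls(graph, "S", "G", limit=1))   # 'cutoff'
```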
Uniform-cost Search:
Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph. This algorithm comes into play when a different cost is available for each edge. The primary goal of uniform-cost search is to find a path to the goal node which has the lowest cumulative cost. Uniform-cost search expands nodes according to their path cost from the root node. It can be used to solve any graph/tree where the optimal cost is in demand. A uniform-cost search algorithm is implemented using a priority queue, which gives maximum priority to the lowest cumulative cost. Uniform-cost search is equivalent to the BFS algorithm if the path cost of all edges is the same.
Advantages:
o Uniform-cost search is optimal because at every state the path with the least cost is chosen.
Disadvantages:
o It does not care about the number of steps involved in the search and is only concerned with path cost, due to which this algorithm may get stuck in an infinite loop.
Example:
Completeness:
Uniform-cost search is complete, such as if there is a solution, UCS will find it.
Time Complexity:
Let C* be the cost of the optimal solution and ε the cost of each step toward the goal node. Then the number of steps is C*/ε + 1 (we take +1 because we start from state 0 and end at C*/ε). Hence the worst-case time complexity of uniform-cost search is O(b^(1 + [C*/ε])).
Space Complexity:
By the same logic, the worst-case space complexity of uniform-cost search is O(b^(1 + [C*/ε])).
Optimal:
Uniform-cost search is always optimal as it only selects a path with the lowest path cost.
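A minimal uniform-cost search sketch using a priority queue keyed by the cumulative path cost g(n); the weighted graph is illustrative:

```python
import heapq

# Uniform-cost search sketch: the frontier is a priority queue ordered by
# cumulative path cost, so the cheapest path is expanded first.

def ucs(graph, start, goal):
    frontier = [(0, start, [start])]          # (path cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbor, step_cost in graph.get(node, []):
            if neighbor not in explored:
                heapq.heappush(frontier, (cost + step_cost, neighbor, path + [neighbor]))
    return None

graph = {"S": [("A", 1), ("B", 5)], "A": [("B", 2), ("G", 9)],
         "B": [("G", 1)], "G": []}
print(ucs(graph, "S", "G"))    # (4, ['S', 'A', 'B', 'G'])
```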
Iterative Deepening Depth-first Search:
This algorithm performs depth-first search up to a certain "depth limit" and keeps increasing the depth limit after each iteration until the goal node is found.
This search algorithm combines the benefits of breadth-first search's fast search and depth-first search's memory efficiency.
The iterative deepening search algorithm is a useful uninformed search when the search space is large and the depth of the goal node is unknown.
Advantages:
o It combines the benefits of BFS and DFS search algorithm in terms of fast search and memory
efficiency.
Disadvantages:
o The main drawback of IDDFS is that it repeats all the work of the previous phase.
Example:
The following tree structure shows the iterative deepening depth-first search. The IDDFS algorithm performs several iterations until it finds the goal node. The iterations performed by the algorithm are given as:
1'st Iteration-----> A
2'nd Iteration----> A, B, C
3'rd Iteration------>A, B, D, E, C, F, G
4'th Iteration------>A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.
Completeness:
This algorithm is complete if the branching factor is finite.
Time Complexity:
Let's suppose b is the branching factor and d is the depth of the shallowest goal; then the worst-case time complexity is O(b^d).
Space Complexity:
The space complexity of IDDFS is O(bd).
Optimal:
IDDFS algorithm is optimal if path cost is a non- decreasing function of the depth of the node.
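A compact sketch of iterative deepening: a recursive depth-limited DFS run with increasing limits (the graph below loosely mirrors the A-to-K example above, but is only illustrative):

```python
# Iterative deepening sketch: run depth-limited DFS with limits 0, 1, 2, ...
# until the goal node is found.

def depth_limited(graph, node, goal, limit, path):
    if node == goal:
        return path
    if limit == 0:
        return None
    for child in graph.get(node, []):
        found = depth_limited(graph, child, goal, limit - 1, path + [child])
        if found:
            return found
    return None

def iddfs(graph, start, goal, max_depth=50):
    for limit in range(max_depth + 1):
        found = depth_limited(graph, start, goal, limit, [start])
        if found:
            return found
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
         "D": ["H", "I"], "F": ["K"]}
print(iddfs(graph, "A", "K"))    # ['A', 'C', 'F', 'K'], found when the limit reaches 3
```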
Bidirectional Search:
Bidirectional search runs two simultaneous searches, one forward from the initial state and one backward from the goal node, and stops when the two searches meet; this replaces one large search graph with two smaller sub-graphs. Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
Advantages:
o Bidirectional search is fast and requires less memory.
Disadvantages:
o Implementation of the bidirectional search tree is difficult, and the goal state should be known in advance.
Example:
In the search tree below, the bidirectional search algorithm is applied. This algorithm divides one graph/tree into two sub-graphs. It starts traversing from node 1 in the forward direction and from goal node 16 in the backward direction.
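A minimal bidirectional BFS sketch over an undirected graph; the six-node graph is illustrative, not the sixteen-node figure mentioned above:

```python
from collections import deque

# Bidirectional BFS sketch: grow one frontier from the start and one from the
# goal, and stop as soon as the two frontiers meet.

def bidirectional_search(graph, start, goal):
    if start == goal:
        return [start]
    front = {start: [start]}              # node -> path from the start side
    back = {goal: [goal]}                 # node -> path from the goal side
    q_front, q_back = deque([start]), deque([goal])
    while q_front and q_back:
        node = q_front.popleft()          # expand one node forward
        for nb in graph.get(node, []):
            if nb in back:                # the two searches meet
                return front[node] + back[nb][::-1]
            if nb not in front:
                front[nb] = front[node] + [nb]
                q_front.append(nb)
        node = q_back.popleft()           # expand one node backward
        for nb in graph.get(node, []):
            if nb in front:               # the two searches meet
                return front[nb] + back[node][::-1]
            if nb not in back:
                back[nb] = back[node] + [nb]
                q_back.append(nb)
    return None

graph = {1: [2, 3], 2: [1, 4], 3: [1, 5], 4: [2, 6], 5: [3, 6], 6: [4, 5]}
print(bidirectional_search(graph, 1, 6))  # e.g. [1, 2, 4, 6]
```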
To solve large problems with large number of possible states, problem-specific knowledge needs to
be added to increase the efficiency of search algorithms.
The uninformed search algorithms looked through the search space for all possible solutions to the problem without having any additional knowledge about the search space. An informed search algorithm, in contrast, uses knowledge such as how far we are from the goal, the path cost, how to reach the goal node, etc. This knowledge helps agents to explore less of the search space and find the goal node more efficiently.
The informed search algorithm is more useful for large search spaces. The informed search algorithm uses the idea of a heuristic, so it is also called heuristic search.
Heuristic function: A heuristic is a function which is used in informed search to find the most promising path. It takes the current state of the agent as its input and produces an estimate of how close the agent is to the goal. The heuristic method might not always give the best solution, but it is guaranteed to find a good solution in reasonable time. The heuristic function estimates how close a state is to the goal. It is represented by h(n), and it estimates the cost of an optimal path between the pair of states. The value of the heuristic function is always non-negative.
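For instance, a common heuristic for grid-based problems is the Manhattan distance; the small sketch below is purely illustrative of such an h(n):

```python
# Illustrative heuristic h(n): Manhattan distance on a grid, estimating how
# close a state (x, y) is to the goal cell. It never returns a negative value.

def h(state, goal):
    (x1, y1), (x2, y2) = state, goal
    return abs(x1 - x2) + abs(y1 - y2)

print(h((0, 0), (3, 4)))   # 7 -- an estimated cost, not the exact path cost
```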
In the informed search we will discuss three main algorithms which are given below:
1. Greedy Best-first Search:
The greedy best-first search algorithm always selects the path which appears best at that moment. It is a combination of the depth-first search and breadth-first search algorithms. It uses the heuristic function and search. Best-first search allows us to take the advantages of both algorithms. With the help of best-first search, at each step we can choose the most promising node. In the best-first search algorithm, we expand the node which is closest to the goal node, where the closeness is estimated by the heuristic function, i.e. f(n) = h(n).
Advantages:
o Best-first search can switch between BFS and DFS, thereby gaining the advantages of both algorithms.
o This algorithm is more efficient than BFS and DFS algorithms.
Disadvantages:
o It can behave as an unguided depth-first search in the worst case scenario.
o It can get stuck in a loop, like DFS.
o This algorithm is not optimal.
Example:
Consider the search problem below, which we will traverse using greedy best-first search. At each iteration, each node is expanded using the evaluation function f(n) = h(n), which is given in the table below.
In this search example, we are using two lists which are OPEN and CLOSED Lists. Following are
the iteration for traversing the above example.
Expand the nodes of S and put in the CLOSED list
Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).
Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where m is the maximum depth of the search space.
Complete: Greedy best-first search is incomplete, even if the given state space is finite.
Optimal: Greedy best-first search is not optimal.
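A minimal greedy best-first search sketch ordering the OPEN list purely by h(n); the graph and heuristic table below are made up for illustration (they are not the figure's values):

```python
import heapq

# Greedy best-first search sketch: the frontier (OPEN list) is ordered only by
# the heuristic value h(n); expanded nodes go to the CLOSED list.

def greedy_best_first(graph, h, start, goal):
    frontier = [(h[start], start, [start])]     # OPEN list as a priority queue
    closed = set()                              # CLOSED list
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in closed:
                heapq.heappush(frontier, (h[neighbor], neighbor, path + [neighbor]))
    return None

graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E", "F"], "F": ["I", "G"]}
h = {"S": 13, "A": 12, "B": 4, "C": 7, "D": 3, "E": 8, "F": 2, "I": 9, "G": 0}
print(greedy_best_first(graph, h, "S", "G"))    # ['S', 'B', 'F', 'G']
```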
2. A* Search:
A* search is the most commonly known form of best-first search. It uses the heuristic function h(n) and the cost to reach node n from the start state, g(n). It combines the features of UCS and greedy best-first search, by which it solves the problem efficiently. The A* search algorithm finds the shortest path through the search space using the heuristic function. This search algorithm expands a smaller search tree and provides an optimal result faster. The A* algorithm is similar to UCS except that it uses g(n) + h(n) instead of g(n).
In the A* search algorithm, we use the search heuristic as well as the cost to reach the node. Hence we can combine both costs as follows, and this sum is called the fitness number:
f(n) = g(n) + h(n)
At each point in the search space, only the node with the lowest value of f(n) is expanded, and the algorithm terminates when the goal node is found.
Algorithm of A* search:
Step 1: Place the starting node in the OPEN list.
Step 2: Check if the OPEN list is empty or not; if the list is empty, then return failure and stop.
Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function (g + h). If node n is the goal node, then return success and stop; otherwise:
Step 4: Expand node n and generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, then compute the evaluation function for n' and place it into the OPEN list.
Step 5: Else, if node n' is already in OPEN or CLOSED, then it should be attached to the back pointer which reflects the lowest g(n') value.
Step 6: Return to Step 2.
Advantages:
o The A* search algorithm is better than other search algorithms.
o The A* search algorithm is optimal and complete.
o This algorithm can solve very complex problems.
Disadvantages:
o It does not always produce the shortest path, as it is mostly based on heuristics and approximation.
o A* search algorithm has some complexity issues.
o The main drawback of A* is memory requirement as it keeps all generated nodes in the
memory, so it is not practical for various large-scale problems.
Example:
In this example, we will traverse the given graph using the A* algorithm. The heuristic value of all
states is given in the below table so we will calculate the f(n) of each state using the formula f(n)= g(n)
+ h(n), where g(n) is the cost to reach any node from start state.
Here we will use OPEN and CLOSED list.
Solution:
Points to remember:
o The A* algorithm returns the path which occurs first, and it does not search for all remaining paths.
o Admissible: the first condition required for optimality is that h(n) should be an admissible heuristic for A* tree search. An admissible heuristic is optimistic in nature, i.e., it never overestimates the true cost to the goal.
o Consistency: the second required condition is consistency, which is needed only for A* graph search.
If the heuristic function is admissible, then A* tree search will always find the least-cost path.
Time Complexity: The time complexity of the A* search algorithm depends on the heuristic function, and the number of nodes expanded is exponential in the depth of the solution d. So the time complexity is O(b^d), where b is the branching factor.
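A compact A* sketch expanding nodes in order of f(n) = g(n) + h(n); the weighted graph and heuristic values below are illustrative, not the example figure referred to above:

```python
import heapq

# A* sketch: nodes are expanded in order of f(n) = g(n) + h(n).

def a_star(graph, h, start, goal):
    open_list = [(h[start], 0, start, [start])]     # (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        for neighbor, step_cost in graph.get(node, []):
            new_g = g + step_cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(open_list,
                               (new_g + h[neighbor], new_g, neighbor, path + [neighbor]))
    return None

graph = {"S": [("A", 1), ("G", 10)], "A": [("B", 2), ("C", 1)],
         "B": [("D", 5)], "C": [("G", 3)], "D": [("G", 2)]}
h = {"S": 5, "A": 3, "B": 4, "C": 2, "D": 6, "G": 0}
print(a_star(graph, h, "S", "G"))    # (5, ['S', 'A', 'C', 'G'])
```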
3. AO* Search:
Real-life situations usually cannot be decomposed into a pure AND tree or a pure OR tree but are always a combination of both. So we need the AO* algorithm, where O stands for 'ordered'. The AO* algorithm works on the part of the search graph that has been explicitly generated so far. The AO* algorithm is given as follows:
• Step-7: If the starting node is SOLVED or its value is greater than FUTILITY, then stop; else repeat from Step-2.
Hill Climbing Algorithm
o The hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing elevation/value to find the peak of the mountain or the best solution to the problem. It terminates when it reaches a peak value where no neighbor has a higher value.
o The hill climbing algorithm is a technique used for optimizing mathematical problems. One of the widely discussed examples of the hill climbing algorithm is the Traveling Salesman Problem, in which we need to minimize the distance traveled by the salesman.
o It is also called greedy local search, as it only looks to its good immediate neighbor state and not beyond that.
o A node of the hill climbing algorithm has two components: state and value.
o Hill climbing is mostly used when a good heuristic is available.
o In this algorithm, we don't need to maintain and handle the search tree or graph, as it only keeps a single current state.
o Generate and Test variant: Hill climbing is a variant of the Generate and Test method. The Generate and Test method produces feedback which helps to decide which direction to move in the search space.
o Greedy approach: The hill-climbing search moves in the direction which optimizes the cost.
o No backtracking: It does not backtrack the search space, as it does not remember previous states.
The state-space landscape is a graphical representation of the hill-climbing algorithm, showing a graph between the various states of the algorithm and the objective function/cost.
On the Y-axis we take the function, which can be an objective function or a cost function, and the state space on the X-axis. If the function on the Y-axis is cost, then the goal of the search is to find the global minimum or a local minimum. If the function on the Y-axis is an objective function, then the goal of the search is to find the global maximum or a local maximum.
Local Maximum: Local maximum is a state which is better than its neighbor states, but there is also
another state which is higher than it.
Global Maximum: Global maximum is the best possible state of state space landscape. It has the
highest value of objective function.
Flat local maximum: It is a flat space in the landscape where all the neighbor states of current states
have the same value.
Simple hill climbing is the simplest way to implement a hill climbing algorithm. It evaluates only one neighbor node state at a time and selects the first one which optimizes the current cost, setting it as the current state. It checks only one successor state, and if that successor is better than the current state, it moves there; otherwise it stays in the same state. This algorithm has the following features:
o Less time consuming
o Less optimal solution, and the solution is not guaranteed
Algorithm for Simple Hill Climbing:
o Step 1: Evaluate the initial state; if it is the goal state, then return success and stop.
o Step 2: Loop until a solution is found or there is no new operator left to apply.
o Step 3: Select and apply an operator to the current state.
o Step 4: Check the new state:
a. If it is the goal state, then return success and quit.
b. Else, if it is better than the current state, then assign the new state as the current state.
c. Else, if it is not better than the current state, then return to Step 2.
o Step 5: Exit.
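A small Python sketch of simple hill climbing on an illustrative one-dimensional objective function (the operator here is just a random ±step move; all names are made up):

```python
import random

# Simple hill climbing sketch: generate one neighbour at a time and move as
# soon as a better state is found (maximising a single-peaked objective).

def objective(x):
    return -(x - 3) ** 2 + 9          # single peak at x = 3

def simple_hill_climbing(start, step=0.1, max_iters=1000):
    current = start
    for _ in range(max_iters):
        neighbour = current + random.choice([-step, step])   # apply one operator
        if objective(neighbour) > objective(current):
            current = neighbour        # accept the first better neighbour
    return current

print(round(simple_hill_climbing(start=0.0), 1))   # approximately 3.0
```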
The steepest-ascent algorithm is a variation of the simple hill climbing algorithm. This algorithm examines all the neighboring nodes of the current state and selects the neighbor node which is closest to the goal state. This algorithm consumes more time, as it searches for multiple neighbors.
Stochastic hill climbing does not examine all of its neighbors before moving. Rather, this search algorithm selects one neighbor node at random and decides whether to choose it as the current state or to examine another state.
Problems in Hill Climbing Algorithm:
1. Local Maximum: A local maximum is a peak state in the landscape which is better than each of its neighboring states, but another state higher than it still exists; the search may stop here prematurely.
Solution: The backtracking technique can be a solution to the local maximum in the state-space landscape. Create a list of promising paths so that the algorithm can backtrack in the search space and explore other paths as well.
2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of the current state have the same value; because of this, the algorithm cannot find any best direction to move. A hill-climbing search might get lost in the plateau area.
Solution: The solution to the plateau is to take big steps or very small steps while searching. Randomly select a state which is far away from the current state, so it is possible that the algorithm will find a non-plateau region.
3. Ridges: A ridge is a special form of the local maximum. It has an area which is higher than its
surrounding areas, but itself has a slope, and cannot be reached in a single move.
Solution: With the use of bidirectional search, or by moving in different directions, we can improve
this problem.
Simulated Annealing:
A hill-climbing algorithm which never makes a move toward a lower value is guaranteed to be incomplete, because it can get stuck on a local maximum. And if the algorithm applies a random walk by moving to a successor, then it may be complete but not efficient. Simulated annealing is an algorithm which yields both efficiency and completeness.
In mechanical terms, annealing is the process of heating a metal or glass to a high temperature and then cooling it gradually, which allows the material to reach a low-energy crystalline state. The same idea is used in simulated annealing, in which the algorithm picks a random move instead of picking the best move. If the random move improves the state, then it follows the same path. Otherwise, the algorithm follows the path which has a probability of less than 1, or it moves downhill and chooses another path.
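A minimal simulated annealing sketch: downhill moves are accepted with probability exp(delta/T), and the temperature T is lowered gradually (the objective function and cooling schedule below are illustrative):

```python
import math
import random

# Simulated annealing sketch: a random move is always accepted if it improves
# the objective, and accepted with probability exp(delta / T) otherwise, while
# the temperature T is cooled gradually.

def objective(x):
    return math.sin(x) + math.sin(3 * x)      # several local maxima

def simulated_annealing(start, T=5.0, cooling=0.995, min_T=1e-3):
    current = start
    while T > min_T:
        neighbour = current + random.uniform(-0.5, 0.5)   # random move
        delta = objective(neighbour) - objective(current)
        if delta > 0 or random.random() < math.exp(delta / T):
            current = neighbour        # sometimes accept a downhill move
        T *= cooling                   # cool down gradually
    return current

best = simulated_annealing(start=0.0)
print(round(best, 2), round(objective(best), 2))
```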
Constraint Satisfaction Problems
• In a constraint satisfaction problem, the states and the goal test conform to a standard, structured, and simple representation, which allows general-purpose heuristics and general-purpose algorithms to be used.
A constraint satisfaction problem (or CSP) is defined by a set of variables X1, X2, ..., Xn and a set of constraints C1, C2, ..., Cm. Each variable Xi has a nonempty domain Di of possible values. Each constraint Ci involves some subset of the variables and specifies the allowable combinations of values for that subset. A state of the problem is defined by an assignment of values to some or all of the variables, {Xi = vi, Xj = vj, ...}. An assignment that does not violate any constraints is called a consistent or legal assignment. A complete assignment is one in which every variable is mentioned, and a solution to a CSP is a complete assignment that satisfies all the constraints. Some CSPs also require a solution that maximizes an objective function.
Solution:
• Each state in a CSP is defined by an assignment of values to some or all of the variables
• An assignment that does not violate any constraints is called a consistent or legal assignment
• A complete assignment is one in which every variable is assigned
• A solution to a CSP is a consistent and complete assignment
• Allows useful general-purpose algorithms with more power than standard search algorithms
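As a hedged illustration, the classic map-colouring formulation below shows variables, domains, and "different colour" constraints solved by simple backtracking over consistent assignments (the regions and names are illustrative):

```python
# Map-colouring sketch of a CSP: variables, domains, and binary "different
# colour" constraints, solved by backtracking over consistent assignments.

variables = ["WA", "NT", "SA", "Q"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbours = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
              "SA": ["WA", "NT", "Q"], "Q": ["NT", "SA"]}

def consistent(var, value, assignment):
    """A value is consistent if no assigned neighbour already uses it."""
    return all(assignment.get(n) != value for n in neighbours[var])

def backtrack(assignment):
    if len(assignment) == len(variables):          # complete assignment
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment):     # keep the assignment legal
            result = backtrack({**assignment, var: value})
            if result:
                return result
    return None                                    # no consistent value: backtrack

print(backtrack({}))   # e.g. {'WA': 'red', 'NT': 'green', 'SA': 'blue', 'Q': 'red'}
```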