AIML Class PPT: Unit 1


UNIT I PROBLEM SOLVING


• Introduction to AI
• AI Applications
• Problem solving agents
• Search algorithms
• Uninformed search strategies
• Heuristic search strategies
• Local search and optimization problems
• Adversarial search
• Constraint satisfaction problems (CSP)
INTRODUCTION

INTELLIGENCE
 The capacity to learn and solve problems; in particular,
the ability to solve novel problems and the ability to act
rationally (i.e., to act based on reason).
 The ability to act like humans.
 AI is concerned with the design of intelligence in an
artificial device; the term was coined by John McCarthy in 1956.
 AI is an attempt to build machines that, like humans,
can think and act.
What is involved in intelligence?
• Ability to interact with the real world: to
perceive (identify), understand, and act
– e.g., speech recognition, understanding, and
generation
– e.g., image understanding
– e.g., ability to take actions and have an effect
• Reasoning and Planning
– modeling the external world, given input
– solving new problems, planning, and making
decisions
– ability to deal with unexpected problems and
uncertainties
• Learning and Adaptation
– we are continuously learning and adapting
– our internal models are always being updated
• e.g., a baby learning to categorize and
recognize animals
ARTIFICIAL INTELLIGENCE
• According to the father of Artificial Intelligence,
John McCarthy, it is "The science and
engineering of making intelligent machines,
especially intelligent computer programs."
• The term AI is defined by different authors in their
own ways; the definitions fall into 4 categories:
1. Systems that think like humans.
2. Systems that act like humans.
3. Systems that think rationally.
4. Systems that act rationally.
SOME DEFINITIONS OF AI

• Building systems that think like humans
"The exciting new effort to make computers
think ... machines with minds, in the full and
literal sense" -- Haugeland, 1985
"The automation of activities that we associate
with human thinking, ... such as decision-
making, problem solving, learning, ..." -- Bellman,
1978
• Building systems that act like humans
"The art of creating machines that perform
functions that require intelligence when
performed by people" -- Kurzweil, 1990
"The study of how to make computers do
things at which, at the moment, people are
better" -- Rich and Knight, 1991
• Building systems that think rationally
"The study of mental faculties through the use
of computational models" -- Charniak and
McDermott, 1985
"The study of the computations that make it
possible to perceive, reason, and act" --
Winston, 1992
• Building systems that act rationally
"Computational Intelligence is the study of the
design of intelligent agents." -- Poole et al.,
1998
"AI is concerned with intelligent behavior in
artifacts (objects)." -- Nilsson, 1998
Goals of AI

• To Implement Human Intelligence in Machines −
Creating systems that understand, think,
learn, and behave like humans.
• To Create Expert Systems − An Expert System is
an application of Artificial Intelligence.
• Such systems exhibit intelligent behavior:
they learn, demonstrate, explain, and advise their
users (i.e., an Expert System is used to provide
expert advice and guidance for various
activities).
Advantages of Artificial Intelligence

• High accuracy with fewer errors: AI systems are less prone to errors
and more accurate, as they take decisions based on prior experience or
information.
• High speed: AI systems can operate at very high speed with fast decision
making; this is why an AI system can beat a chess champion at chess.
• High reliability: AI machines are highly reliable and can perform the same
action multiple times with consistent accuracy.
• Useful for risky areas: AI machines can be helpful in situations such as
defusing a bomb or exploring the ocean floor, where employing a human
would be risky.
• Digital assistants: AI can provide digital assistance to users; for
example, AI technology is currently used by various e-commerce
websites to show products matching customer requirements.
• Useful as a public utility: AI can be very useful for public utilities, such as
self-driving cars that make journeys safer and hassle-free, facial
recognition for security purposes, natural language processing to
communicate with humans in human language, etc.
Disadvantages of Artificial Intelligence

• High cost: The hardware and software requirements of AI are very
costly, and AI requires a lot of maintenance to meet current-world
requirements.
• Can't think outside the box: Even though we are making machines
smarter with AI, they still cannot work outside the box: a robot will only
do the work for which it is trained or programmed.
• No feelings and emotions: An AI machine can be an outstanding
performer, but it does not have feelings, so it cannot form any
kind of emotional attachment with humans, and may sometimes be
harmful to users if proper care is not taken.
• Increased dependency on machines: As technology advances,
people are becoming more dependent on devices, and hence are
exercising their own mental capabilities less.
• No original creativity: Humans are creative and can imagine
new ideas, but AI machines cannot match this power of human
intelligence and cannot be creative and imaginative.
Acting Humanly: The Turing Test Approach
• Test proposed by Alan Turing in 1950
• The computer is asked questions by a human interrogator.
• The computer passes the test if a human interrogator, after posing
some written questions, cannot tell whether the written responses
come from a person or not.
• To pass the test, the computer needs to possess the
following capabilities:
– Natural language processing to enable it to communicate successfully in
English.
– Knowledge representation to store what it knows or hears
– Automated reasoning to use the stored information to answer questions and
to draw new conclusions.
– Machine learning to adapt to new circumstances and to detect and
extrapolate patterns.
• To pass the complete Turing Test, the computer will need
– Computer vision to perceive the objects, and
– Robotics to manipulate objects and move about.
Some definitions
(a)Intelligence - Ability to apply knowledge in order
to perform better in an environment.
(b)Artificial Intelligence - Study and construction of
agent programs that perform well in a given
environment, for a given agent architecture.
(c)Agent - An entity that takes action in response to
percepts from an environment.
(d)Rationality - property of a system which does the
“right thing” given what it knows.
(e)Logical Reasoning - A process of deriving new
sentences from old, such that the new sentences
are necessarily true if the old ones are true
FUTURE OF ARTIFICIAL INTELLIGENCE
• Transportation: Autonomous cars will one day shuttle us
from place to place.
• Manufacturing: AI powered robots work alongside
humans to perform a limited range of tasks like assembly
and stacking, and predictive analysis sensors keep
equipment running smoothly.
• Healthcare: In the field of healthcare, diseases are more
quickly and accurately diagnosed, drug discovery is
sped up and streamlined, virtual nursing assistants
monitor patients, and big data analysis helps to create a
more personalized patient experience.
• Media: Journalism is harnessing AI, too, and will
continue to benefit from it. Cyborg technology helps
make quick sense of complex financial reports.
• Customer Service: Google is working on an AI
assistant that can place human-like calls to make
appointments.
• Education: Textbooks are digitized with the help of
AI, early-stage virtual tutors assist human
instructors, and facial analysis determines the
emotions of students to help identify who is
struggling or bored and to better tailor the experience
to their individual needs.
AGENTS AND THEIR TYPES
•An agent is anything that can be viewed as perceiving its
environment through sensors and acting upon that environment
through actuators.
An agent can be:
• Human agent: A human agent has eyes, ears, and other
organs which work as sensors, and hands, legs, and the
vocal tract which work as actuators.
• Robotic agent: A robotic agent can have cameras and
infrared range finders as sensors and various
motors as actuators.
• Software agent: A software agent can receive keystrokes and
file contents as sensory input, act on those inputs,
and display output on the screen.
• Hence the world around us is full of agents such as
thermostat, cell phone, camera, and even we are also
agents.
• Before moving forward, we should first know about
sensors, effectors, and actuators.
• Sensor: Sensor is a device which detects the
change in the environment and sends the
information to other electronic devices. An
agent observes its environment through
sensors
• Actuators: Actuators are the component of
machines that converts energy into motion.
The actuators are only responsible for moving
and controlling a system. An actuator can be an
electric motor, gears, etc.
• Effectors: Effectors are the devices which affect
the environment. Effectors can be legs, wheels,
arms, fingers, wings, fins, and display screen
• Human Sensors:
– Eyes, ears, and other organs for sensors.
• Human Actuators:
– Hands, legs, mouth, and other body parts.
• Robotic Sensors:
– Mic, cameras and infrared range finders for sensors
• Robotic Actuators:
– Motors, Display, speakers etc
Percept:
The agent's perceptual inputs at any given
instant.
Percept Sequence:
The complete history of everything the agent has
ever perceived.
Agent Function:
Maps any given percept sequence to an action.
Agent Program:
A concrete implementation of the agent function
that runs on the agent architecture.
CHARACTERISTICS OF INTELLIGENT AGENTS
1.Situatedness: The agent receives some form of sensory input from
its environment, and it performs some action that changes its
environment in some way.
• Examples of environments: the physical world and the Internet.
2.Autonomy: The agent can act without direct intervention by
humans or other agents and that it has control over its own actions
and internal state.
3.Adaptivity:
The agent is capable of
• (1) reacting flexibly to changes in its environment;
• (2)taking goal-directed initiative (i.e., is pro-active), when
appropriate;
• (3)Learning from its own experience, its environment, and
interactions with others
4.Sociability: The agent is capable of interacting in a peer-to-peer
manner with other agents or humans
PEAS Representation
PEAS stands for Performance measure, Environment,
Actuators, and Sensors: the four elements used to
specify the task environment of an agent.
Categories of Agents

• Simple Reflex Agents
• Model-Based Reflex Agents
• Goal-Based Agents
• Utility-Based Agents
• Learning Agents
1.Simple Reflex Agent
• The simplest agents, which take decisions on the
basis of the current percept and ignore the rest of the
percept history.
• They succeed only in a fully observable
environment.
• They do not consider any part of the percept history
during their decision and action process.
• They work on the condition-action rule, which maps the
current state to an action.
• For example, a room-cleaner agent works only
if there is dirt in the room.
• Problems with the simple reflex agent design
approach:
– They have very limited intelligence.
– They have no knowledge of non-perceptual
parts of the current state.
– The set of condition-action rules is usually too big
to generate and to store.
– They are not adaptive to changes in the environment.
2.Model-based reflex agent

• The model-based agent can work in a partially
observable environment and track the situation.
• A model-based agent has two important factors:
– Model: knowledge about "how things happen in the
world"; this is why it is called a model-based agent.
– Internal state: a representation of the current state
based on the percept history.
• These agents have the model, "which is knowledge
of the world," and based on the model they perform
actions.
• Updating the agent state requires information about:
– how the world evolves, and
– how the agent's actions affect the world.
3.Goal-based agents

• Knowledge of the current state of the environment is not
always sufficient for an agent to decide what to do.
• The agent needs to know its goal, which describes
desirable situations.
• Goal-based agents expand the capabilities of the model-
based agent by adding the "goal" information.
• They choose actions so as to achieve the goal.
• These agents may have to consider a long sequence of
possible actions before deciding whether the goal is
achievable or not.
• Such consideration of different scenarios is called
searching and planning, which makes an agent proactive.
4.Utility-based agents
• These agents are similar to the goal-based agent but
provide an extra component of utility measurement
which makes them different by providing a measure of
success at a given state.
• The utility-based agent acts based not only on goals but also on
the best way to achieve the goal.
• The Utility-based agent is useful when there are
multiple possible alternatives, and an agent has to
choose in order to perform the best action.
• The utility function maps each state to a real number to
check how efficiently each action achieves the goals.
5.Learning Agents

• A learning agent in AI is the type of agent that can learn from
its past experiences; that is, it has learning capabilities.
• It starts to act with basic knowledge and is then able to act and
adapt automatically through learning.
• A learning agent has four main conceptual components:
– Learning element: responsible for making improvements by
learning from the environment.
– Critic: the learning element takes feedback from the critic, which
describes how well the agent is doing with respect to a fixed
performance standard.
– Performance element: responsible for selecting external actions.
– Problem generator: responsible for suggesting
actions that will lead to new and informative experiences.
• Hence, learning agents are able to learn, analyze their performance,
and look for new ways to improve it.
The Structure of Intelligent Agents
• Agent = Architecture + Agent Program
• Architecture = the machinery that the agent executes on
(hardware).
• Agent Program = an implementation of the agent
function (algorithm/logic - software).
Problem Searching
• In general, searching refers to finding the
information one needs.
• Searching is the most commonly used technique
for problem solving in artificial intelligence.
• A searching algorithm helps us search for a
solution.
• Problem: Problems are the issues which come up
in any system.
• A solution is needed to solve a particular
problem.
Solve Problem Using Artificial Intelligence
The process of solving a problem consists of five steps
1.Defining the Problem:
– The definition of the problem must be stated precisely.
– It should contain the possible initial as well as final situations,
which should result in an acceptable solution.
2. Analyzing the Problem:
– The problem and its requirements must be analyzed, as
a few features can have an immense impact on the resulting
solution.
3. Identification of Solutions:
– This phase generates a reasonable number of solutions to the
given problem within a particular range.
4. Choosing a Solution:
– From all the identified solutions, the best solution is chosen
based on the results produced by the respective solutions.
5. Implementation:
– After choosing the best solution, it is implemented.
Measuring problem-solving performance
We can evaluate an algorithm’s performance in
four ways:
• Completeness: Is the algorithm guaranteed to
find a solution when there is one?
• Optimality: Does the strategy find the optimal
solution?
• Time complexity: How long does it take to find
a solution?
• Space complexity: How much memory is
needed to perform the search?
Search Algorithm Terminologies
• Search: Searching is a step by step procedure to
solve a search-problem in a given search space.
A search problem can have three main factors:
1. Search Space: Search space represents a set of
possible solutions, which a system may have.
2. Start State: It is a state from where agent
begins the search.
3. Goal test: It is a function which observes the
current state and returns whether the goal
state has been achieved or not.
Types of search algorithms
Search tree
A tree representation of a search problem is called a
search tree. The root of the search tree is the root
node, which corresponds to the initial state.
• Actions: a description of all the actions available
to the agent.
• Transition model: a description of what each action does,
which can be represented as a transition model.
• Path cost: a function which assigns a numeric cost
to each path.
• Solution: an action sequence which leads from the
start node to the goal node.
• Optimal solution: a solution with the lowest cost
among all solutions.
Uninformed/Blind Search

• An uninformed search does not use any domain knowledge,
such as closeness to or the location of the goal.
• It operates in a brute-force way, as it only includes information about
how to traverse the tree and how to identify leaf and goal nodes.
• Uninformed search searches the tree without any information about
the search space beyond the initial state, the operators, and the goal
test, so it is also called blind search.
• It examines each node of the tree until it reaches the goal node.
• It can be divided into five main types:
– 1. Breadth-first search
– 2. Depth-first search
– 3. Iterative deepening depth-first search
– 4. Bidirectional search
– 5. Uniform cost search
1. Breadth-first Search

• Breadth-first search is the most common search strategy for
traversing a tree or graph.
• This algorithm searches breadthwise in a tree or graph, so it is
called breadth-first search.
• The BFS algorithm starts searching from the root node of the tree
and expands all successor nodes at the current level before
moving to nodes of the next level.
• The breadth-first search algorithm is an example of a general-
graph search algorithm.
• Breadth-first search is implemented using a FIFO queue data
structure.
Advantages:
• BFS will provide a solution if any solution exists.
• If there is more than one solution for a given
problem, then BFS will provide the minimal solution,
i.e., the one requiring the least number of steps.
Disadvantages:
• It requires a lot of memory, since each level of the tree
must be saved in memory in order to expand the next level.
• BFS needs a lot of time if the solution is far away from
the root node.
Example
• In the tree structure, we
have shown the
traversing of the tree
using BFS algorithm from
the root node S to goal
node K.
• BFS search algorithm
traverse in layers, so it
will follow the path which
is shown by the dotted
arrow, and the traversed
path will be,
2.Depth-first Search

• Depth-first search is a recursive algorithm for traversing a tree or graph
data structure.
• It is called depth-first search because it starts from the root node
and follows each path to its greatest depth before moving to the
next path.
• DFS uses a stack data structure for its implementation. The process of
the DFS algorithm is similar to the BFS algorithm.
Advantages:
• DFS requires much less memory, as it only needs to store the stack of
nodes on the path from the root node to the current node.
• It takes less time to reach the goal node than the BFS algorithm (if it
traverses the right path).
Disadvantages:
• There is the possibility that many states keep recurring, and there is
no guarantee of finding a solution.
• The DFS algorithm goes deep down in its search, and it may descend
into an infinite loop.
A fringe is a data structure used to store all the candidate states
(nodes) reachable from the current state.
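A sketch of DFS with an explicit stack as the fringe; the graph is again a made-up example:

```python
# A minimal DFS sketch: the fringe is a LIFO stack of partial paths, so the
# most recently discovered path is followed to its greatest depth first.
def dfs(graph, start, goal):
    stack = [[start]]
    visited = set()
    while stack:
        path = stack.pop()             # newest (deepest) path first
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # Push neighbours in reverse so the first neighbour is explored first.
        for neighbour in reversed(graph.get(node, [])):
            stack.append(path + [neighbour])
    return None

graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E"], "E": ["K"]}
print(dfs(graph, "S", "K"))  # ['S', 'B', 'E', 'K']
```

The `visited` set addresses the re-occurring-states problem mentioned above; without it, DFS on a graph with cycles can loop forever.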
3.UNIFORM-COST SEARCH

• Instead of expanding the shallowest node,
uniform-cost search expands the node n with
the lowest path cost.
• Uniform-cost search does not care about the
number of steps a path has, but only about
its total cost.
4.BACKTRACKING SEARCH
• A variant of depth-first search called backtracking
search uses less memory: only one successor is
generated at a time rather than all successors.
• Only O(m) memory is needed rather than O(bm).
• The problem of unbounded trees can be eased by
supplying depth-first search with a predetermined
depth limit l.
• That is, nodes at depth l are treated as if they have
no successors.
• This approach is called depth-limited search.
• The depth limit solves the infinite-path problem.
5.ITERATIVE DEEPENING DEPTH-FIRST SEARCH
• Iterative deepening search (or iterative-
deepening depth-first search) is a general
strategy, often used in combination with
depth-first search, that finds the best depth
limit.
• It does this by gradually increasing the limit -
first 0, then 1, then 2, and so on - until a goal is
found.
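The two ideas above (a depth limit, and gradually increasing it) can be sketched together; the graph is a made-up example:

```python
# Depth-limited search: DFS that treats nodes at depth `limit` as if they
# had no successors, which solves the infinite-path problem.
def depth_limited_search(graph, node, goal, limit, path=None):
    if path is None:
        path = [node]
    if node == goal:
        return path
    if limit == 0:
        return None                    # cut off: pretend no successors
    for neighbour in graph.get(node, []):
        result = depth_limited_search(graph, neighbour, goal,
                                      limit - 1, path + [neighbour])
        if result is not None:
            return result
    return None

# Iterative deepening: rerun depth-limited search with limits 0, 1, 2, ...
def iterative_deepening_search(graph, start, goal, max_depth=20):
    for limit in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal, limit)
        if result is not None:
            return result
    return None

graph = {"S": ["A", "B"], "A": ["C"], "B": ["E"], "E": ["K"]}
print(iterative_deepening_search(graph, "S", "K"))  # ['S', 'B', 'E', 'K']
```

Re-expanding shallow nodes on every iteration looks wasteful, but because a tree grows exponentially with depth, the repeated work is a small constant factor, while memory stays as low as plain DFS.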
Bidirectional Search

• The idea behind bidirectional search is to run two
simultaneous searches - one forward from the initial state
and the other backward from the goal - stopping when the
two searches meet in the middle.
• Forward search: searching from the start toward the goal.
• Backward search: searching from the goal back toward the
start.
• So, bidirectional search, as the name suggests, is a
combination of forward and backward search.
• We traverse the tree from both the start node and the
goal node, and wherever they meet, the path from the start
node to the goal through the intersection is the optimal
solution.
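A sketch of bidirectional search using BFS from both ends. It assumes an undirected graph (so the backward search can follow edges in reverse); the graph itself is a made-up example:

```python
from collections import deque

# Expand one BFS layer for one direction; return the meeting node if this
# layer touches the other direction's visited set.
def expand_layer(graph, frontier, parents, other_parents):
    for _ in range(len(frontier)):
        node = frontier.popleft()
        for neighbour in graph.get(node, []):
            if neighbour not in parents:
                parents[neighbour] = node
                frontier.append(neighbour)
                if neighbour in other_parents:
                    return neighbour   # the two searches have met
    return None

def bidirectional_search(graph, start, goal):
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])
    while frontier_f and frontier_b:
        meet = (expand_layer(graph, frontier_f, parents_f, parents_b)
                or expand_layer(graph, frontier_b, parents_b, parents_f))
        if meet is not None:
            # Stitch the two half-paths together at the meeting node.
            forward, node = [], meet
            while node is not None:
                forward.append(node)
                node = parents_f[node]
            backward, node = [], parents_b[meet]
            while node is not None:
                backward.append(node)
                node = parents_b[node]
            return forward[::-1] + backward
    return None

graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(bidirectional_search(graph, "A", "D"))  # ['A', 'B', 'C', 'D']
```

Each search only needs to go half the distance, so roughly O(b^(d/2)) nodes are expanded instead of O(b^d).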
II.Informed Search Algorithms

• Uninformed search algorithms look through the search
space for all possible solutions of the problem without
any additional knowledge about the search space.
• An informed search algorithm, by contrast, uses knowledge
such as how far we are from the goal, the path cost, how to
reach the goal node, etc.
• This knowledge helps agents explore less of the search space
and find the goal node more efficiently.
• Informed search algorithms are more useful for large search
spaces.
• Informed search algorithms use the idea of a heuristic, so they
are also called heuristic search.
Heuristic function

• A heuristic is a function used in informed search
to find the most promising path.
• It takes the current state of the agent as its input and
produces an estimate of how close the agent is to the
goal.
• The heuristic method might not always give the
best solution, but it is guaranteed to find a good solution in
reasonable time.
• The heuristic function estimates how close a state is to the
goal.
• It is represented by h(n), and it estimates the cost of an
optimal path between the pair of states.
• The value of the heuristic function is always positive.
• Admissibility of the heuristic function is given
as:
h(n) <= h*(n)
• Here,
– h(n) is the heuristic (estimated) cost, and
– h*(n) is the actual cost of an optimal path to the goal.
• Hence the estimated cost should never exceed the
actual cost.
Pure Heuristic Search
• Pure heuristic search is the simplest form of heuristic search
algorithms.
• It expands nodes based on their heuristic value h(n).
• It maintains two lists, OPEN and CLOSED list.
• In the CLOSED list, it places those nodes which have already
expanded and in the OPEN list, it places nodes which have yet
not been expanded.
• On each iteration, the node n with the lowest heuristic value is
expanded, all its successors are generated, and n is placed in the
CLOSED list.
• The algorithm continues until a goal state is found.
• In the informed search we will discuss two main algorithms
which are given below:
– 1. Best First Search Algorithm(Greedy search)
– 2. A* Search Algorithm
1) Best-first Search Algorithm (Greedy Search)
• The greedy best-first search algorithm always selects the path which
appears best at that moment.
• It is the combination of depth-first search and breadth-first search
algorithms.
• It uses the heuristic function to guide the search.
• Best-first search allows us to take the advantages of both algorithms.
• With the help of best-first search, at each step, we can choose the most
promising node.
• In the best-first search algorithm, we expand the node which is closest to
the goal node, where the closeness is estimated by the heuristic function,
i.e.,
f(n) = h(n),
• where h(n) = estimated cost from node n to the goal.
• Greedy best-first search is implemented using a priority queue.
Advantages:
• Best-first search can switch between BFS and
DFS, gaining the advantages of both
algorithms.
• This algorithm is more efficient than the BFS and
DFS algorithms.
Disadvantages:
• It can behave like an unguided depth-first
search in the worst-case scenario.
• It can get stuck in a loop, like DFS.
• This algorithm is not optimal.
• Consider the below search
problem, and we will
traverse it using greedy
best-first search.
• At each iteration, each node
is expanded using
evaluation function
f(n)=h(n) , which is given in
the below table.
• In this search example, we
are using two lists which are
OPEN and CLOSED Lists.
• Following are the iteration
for traversing the above
example
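Greedy best-first search can be sketched as follows: the frontier is a priority queue ordered by h(n) alone. The graph and heuristic values are made-up examples, not the table from the slides:

```python
import heapq

# A minimal greedy best-first search sketch: expand the node that *appears*
# closest to the goal, i.e. the one with the minimal heuristic value h(n).
def greedy_best_first(graph, h, start, goal):
    frontier = [(h[start], start, [start])]
    closed = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for neighbour in graph.get(node, []):
            if neighbour not in closed:
                heapq.heappush(frontier,
                               (h[neighbour], neighbour, path + [neighbour]))
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"]}
h = {"S": 5, "A": 3, "B": 1, "G": 0}
print(greedy_best_first(graph, h, "S", "G"))  # ['S', 'B', 'G']
```

Because only h(n) is used, the path found depends entirely on how good the heuristic is: it can be fast, but it is not guaranteed to be optimal.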
2) A* Search Algorithm

• A* search is the most widely known form of best-first search. It
uses the heuristic function h(n), which returns the estimated cost
from the current node to the goal, and g(n), the cost to reach node
n from the start state.
• It combines features of uniform-cost search and greedy best-
first search, which lets it solve the problem efficiently.
• The A* search algorithm finds the shortest path through the search
space using the heuristic function.
• This search algorithm expands a smaller search tree and provides an
optimal result faster.
• The A* algorithm is similar to UCS except that it uses g(n) + h(n)
instead of g(n).
• In the A* search algorithm, we use the search heuristic as well as
the cost to reach the node.
• Hence we can combine both costs as follows; this sum is called
the fitness number.
Advantages:
• The A* search algorithm performs better than
many other search algorithms.
• The A* search algorithm is optimal and complete.
• This algorithm can solve very complex problems.
Disadvantages:
• It does not always produce the shortest path, as it
partly relies on heuristics and approximation.
• The main drawback of A* is its memory requirement:
it keeps all generated nodes in memory, so it is not
practical for many large-scale problems.
• In this example, we will traverse the given
graph using the A* algorithm.
• The heuristic value of all states is given in the
below table so we will calculate the f(n) of
each state using the formula f(n)= g(n) + h(n),
• where g(n) is the cost to reach any node from
start state.
• Here we will use OPEN and CLOSED list.
• The efficiency of A* algorithm depends on the
quality of heuristic.
• The A* algorithm expands all nodes which satisfy
the condition f(n) < C*, where C* is the cost of the
optimal solution.
Complete:
• A* is complete as long as:
– the branching factor is finite, and
– every action has a fixed, positive cost.
Optimal: The A* search algorithm is optimal if it meets these two
conditions:
Admissible: the first condition required for optimality is that h(n)
be an admissible heuristic for A* tree search. An admissible
heuristic is one that never overestimates the cost to reach the goal.
If the heuristic function is admissible, A* tree search will always
find the least-cost path.
Consistency: for A* graph search, the heuristic must also be
consistent, i.e., h(n) <= c(n, n') + h(n') for every successor n' of n.
Time Complexity: The time complexity of A* search algorithm
depends on heuristic function, and the number of nodes
expanded is exponential to the depth of solution d.
So the time complexity is O(b^d), where b is the branching factor.
Space Complexity: The space complexity of A* search algorithm is
O(b^d)
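A sketch of A*: the frontier is ordered by f(n) = g(n) + h(n). The weighted graph and the (admissible, consistent) heuristic values are made-up examples, not the table from the slides:

```python
import heapq

# A minimal A* sketch. best_g records the cheapest known cost to each node,
# so a node is re-queued only when a cheaper route to it is found.
def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for neighbour, step_cost in graph.get(node, []):
            new_g = g + step_cost
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                heapq.heappush(frontier,
                               (new_g + h[neighbour], new_g, neighbour,
                                path + [neighbour]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)], "B": [("G", 3)]}
h = {"S": 5, "A": 4, "B": 2, "G": 0}  # never overestimates the true cost
print(a_star(graph, h, "S", "G"))  # (6, ['S', 'A', 'B', 'G'])
```

With g(n) alone this would be uniform-cost search; with h(n) alone it would be greedy best-first search. Using the sum keeps UCS's optimality (given an admissible h) while letting the heuristic steer the search toward the goal.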
Local Search Algorithms and Optimization
Problems
 Informed and uninformed search expand the nodes
systematically in two ways:
• keeping different paths in memory, and
• selecting the best suitable path,
which leads to a solution state required to reach the goal node.
• Beyond these "classical search algorithms," we have
"local search algorithms," in which the path cost does
not matter; they focus only on the solution state needed to
reach the goal node.
• A local search algorithm completes its task by operating
on a single current node rather than multiple paths, and
generally moving only to the neighbors of that node.
Advantages:
• Local search algorithms use very little (often a
constant amount of) memory, as they operate
on only a single path.
• Most often, they find a reasonable solution in
large or infinite state spaces where the
classical or systematic algorithms do not work.
Working of a Local search algorithm
Consider a state-space landscape having both:
• Location: defined by the state.
• Elevation: defined by the value of the objective
function or heuristic cost function.
The local search algorithm explores such a landscape by
finding one of the following two points:
• Global minimum: if elevation corresponds to cost,
then the task is to find the lowest valley, which is known
as the global minimum.
• Global maximum: if elevation corresponds to an
objective function, then the task is to find the highest peak,
which is called the global maximum. It is the highest point
in the landscape. We will understand the working of these
points better in hill-climbing search.
Some different types of local searches

• Hill-climbing Search
• Simulated Annealing
• Local Beam Search
Hill-Climbing Search
• Hill climbing search is a local search problem.
• The purpose of the hill climbing search is to climb a hill and
reach the topmost peak/ point of that hill.
• It is based on the heuristic search technique where the
person who is climbing up on the hill estimates the
direction which will lead him to the highest peak.
State-space Landscape of Hill climbing algorithm:
• To understand the concept of hill climbing algorithm,
consider the below landscape representing the goal
state/peak and the current state of the climber.
• The topographical regions shown in the figure can be
defined as:
• Global Maximum: It is the highest point on the
hill, which is the goal state.
• Local Maximum: It is a peak that is higher than all its
neighboring states but lower than the global maximum.
• Flat local maximum: It is the flat area over the
hill where it has no uphill or downhill. It is a
saturated point of the hill.
• Shoulder: It is a flat area and a region having an
edge upwards
• Current state: It is the current position of the
person
Types of Hill climbing search algorithm
There are following types of hill-climbing search:
• a) Simple hill climbing
• b) Steepest-ascent hill climbing
• c) Stochastic hill climbing
• d) Random-restart hill climbing
a) Simple hill climbing search

• Simple hill climbing is the simplest technique to climb a hill.
• The task is to reach the highest peak of the mountain.
• Here, the movement of the climber depends on his moves/steps.
• If he finds his next step better than the previous one, he
continues to move; otherwise, he remains in the same state.
• This search focuses only on his previous and next step.
Simple hill climbing algorithm:
• 1. Create a CURRENT node, a NEIGHBOUR node, and a GOAL
node.
• 2. If the CURRENT node = GOAL node, return GOAL and
terminate the search.
• 3. Else, if the NEIGHBOUR node is better than the CURRENT
node, set CURRENT = NEIGHBOUR and move ahead.
• 4. Loop until the goal is reached or no better point is found.
b) Steepest-ascent hill climbing
• Steepest-ascent hill climbing is different from simple hill climbing
search.
• Unlike simple hill climbing search, It considers all the successive
nodes, compares them, and choose the node which is closest to the
solution.
• Steepest hill climbing search is similar to best-first search because it
focuses on each node instead of one.
• Note: Both simple, as well as steepest-ascent hill climbing search,
fails when there is no closer node.
Steepest-ascent hill climbing algorithm
• 1. Create a CURRENT node and a GOAL node.
• 2. If the CURRENT node = GOAL node, return GOAL and terminate the
search.
• 3. Loop until no better successor node can be found.
• 4. If a better successor node is present, expand it.
• 5. When the GOAL is attained, return GOAL and terminate.
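The difference from simple hill climbing is that all successors are compared before moving. A minimal sketch, again with hypothetical `objective` and `neighbours` callbacks:

```python
def steepest_ascent(current, objective, neighbours, max_steps=1000):
    """Examine ALL successors and move to the best one, if it improves."""
    for _ in range(max_steps):
        candidates = list(neighbours(current))
        if not candidates:
            return current
        best = max(candidates, key=objective)        # best of all successors
        if objective(best) <= objective(current):
            return current                           # no better successor: stop
        current = best
    return current

# Same toy objective as for simple hill climbing
print(steepest_ascent(0, lambda x: -(x - 3) ** 2, lambda x: [x - 1, x + 1]))  # 3
```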
c) Stochastic hill climbing:
• Stochastic hill climbing does not focus on all the nodes.
It selects one node at random and decides whether to
expand it or to search for a better one.
d) Random-restart hill climbing
• The random-restart algorithm is based on a try-and-try
strategy: it runs hill climbing repeatedly from random initial states.
• It iteratively searches and selects the best node
at each step until the goal is found.
• Success depends most commonly on the shape of
the hill.
• If there are few plateaus, local maxima, and ridges, it
becomes easy to reach the destination.
Limitations of Hill climbing algorithm
• Hill climbing is a fast and furious approach: it finds
the solution state rapidly
because it is quite easy to
improve a bad state.
• But this search has the following
limitations:
Local Maxima:
• It is a peak of the mountain
which is higher than all its
neighbouring states but lower
than the global maximum.
• It is not the goal peak because
there is another peak higher
than it.
Limitations of Hill climbing algorithm
• Plateau: It is a flat
surface area where no
uphill exists.
• It becomes difficult for
the climber to decide
in which direction to move
to reach the goal point.
Sometimes the climber
gets lost in the flat area.
Limitations of Hill climbing algorithm
• Ridges: It is a
challenging problem
where the person
commonly finds two or
more local maxima of
the same height.
• It becomes difficult for
the person to navigate
to the right point, and the
search gets stuck there.
2. Simulated Annealing
• Simulated annealing is similar to the hill climbing
algorithm.
• It works on the current situation. It picks a random
move instead of picking the best move.
• If the move improves the
current situation, it is always accepted as a step
towards the solution state; otherwise the move is
accepted with some probability less than 1.
• This search technique was first used in 1980 to
solve VLSI layout problems.
• It is also applied for factory scheduling and other
large optimization tasks.
3. Local Beam Search
• Local beam search is quite different from random-restart
search.
• It keeps track of k states instead of just one.
• It selects k randomly generated states, and expand them
at each step.
• If any state is a goal state, the search stops with success.
• Else it selects the best k successors from the complete
list and repeats the same process.
• In random-restart search each search process runs
independently, but in local beam search the necessary
information is shared between the parallel search
processes.
Disadvantages of Local Beam search
• This search can suffer from a lack of
diversity (differences) among the k states.
• It is an expensive version of hill climbing
search.
• Note: A variant of Local Beam Search is
Stochastic Beam Search which selects k
successors at random rather than choosing
the best k successors.
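The shared-pool idea can be sketched as follows: successors of all k states go into one pool, and the k best of that pool survive each step. The toy objective and neighbour function are illustrative assumptions.

```python
import random

def local_beam_search(k, random_state, objective, neighbours, steps=100):
    """Keep the k best states overall at each step, sharing the candidate pool."""
    states = [random_state() for _ in range(k)]
    for _ in range(steps):
        pool = []
        for s in states:
            pool.extend(neighbours(s))       # successors of ALL k states
        pool.extend(states)                  # current states may survive too
        # the k best of the shared pool survive: information is shared
        states = sorted(pool, key=objective, reverse=True)[:k]
    return states[0]

random.seed(1)
best = local_beam_search(
    k=3,
    random_state=lambda: random.randint(-10, 10),
    objective=lambda x: -(x - 3) ** 2,
    neighbours=lambda x: [x - 1, x + 1],
)
print(best)  # 3
```

Swapping the deterministic `sorted(...)[:k]` selection for a random draw weighted by objective value would turn this into the stochastic beam search variant mentioned in the note.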
Example: Travelling Salesman Problem
PROBLEM SOLVING APPROACH TO TYPICAL AI PROBLEMS
• Search techniques are universal problem solving
methods.
• Problem-solving agents in AI mostly use these
search strategies to solve a specific problem and
provide the best result. These goal-based agents
use an atomic representation.
• Various problem solving search algorithms
available for the following problems
1. Chess.
2. Travelling Salesman Problem.
3. Tower of Hanoi Problem.
4. Water-Jug Problem.
5. N-Queen Problem
Toy Problem
• A toy problem is intended to illustrate various problem-
solving methods.
• A real-world problem is one whose solutions people
actually care about.
Toy Problem: Vacuum World
• States: The state is determined by both the agent location
and the dirt locations.
• The agent is in one of 2 locations, each of which
might or might not contain dirt.
• Thus there are 2 * 2^2 = 8 possible world states.
• Initial state: Any state can be designated as the initial
state.
• Actions: In this simple environment, each state has
just three actions: Left, Right, and Suck.
• Larger environments might also include Up and Down.
• Transition model: The actions have their expected
effects, except that moving Left in the leftmost square,
moving Right in the rightmost square, and Sucking in a
clean square have no effect.
• The complete state space is shown in Figure.
• Goal test: This checks whether all the squares are
clean.
• Path cost: Each step costs 1, so the path cost is the
number of steps in the path.
Arcs denote actions: L = Left, R = Right, S = Suck
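The vacuum-world formulation above translates directly into code. A minimal sketch; the state encoding (agent location plus two dirt flags, with squares named 'A' and 'B') is an assumed representation:

```python
from itertools import product

# State: (agent_location, dirt_at_A, dirt_at_B); 'A' is left, 'B' is right.
STATES = [(loc, a, b) for loc, a, b in product('AB', [True, False], [True, False])]

def result(state, action):
    """Transition model: Left/Right/Suck, with no effect when moving Left in
    the leftmost square, Right in the rightmost, or Sucking a clean square."""
    loc, dirt_a, dirt_b = state
    if action == 'Left':
        return ('A', dirt_a, dirt_b)
    if action == 'Right':
        return ('B', dirt_a, dirt_b)
    if action == 'Suck':
        if loc == 'A':
            return (loc, False, dirt_b)
        return (loc, dirt_a, False)
    raise ValueError(action)

def goal_test(state):
    return not state[1] and not state[2]     # all squares clean

print(len(STATES))                           # 8 possible world states
s = result(('A', True, True), 'Suck')        # clean square A first
print(goal_test(result(result(s, 'Right'), 'Suck')))  # True
```

With a unit path cost, the plan Suck, Right, Suck reaches the goal from the all-dirty start at a cost of 3.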
The 8-puzzle consists of a 3x3 board with 8 numbered tiles and a
blank space. A tile adjacent to the blank space can slide into the
space. The object is to reach the goal state.
• Initial state: Any state can be designated as the initial state.
• Note that any given goal can be reached from exactly half of
the possible initial states.
• Actions: The simplest formulation defines the actions as movements
of the blank space Left, Right, Up, or Down.
• Different subsets of these are possible depending on where
the blank is.
• Transition model: Given a state and action, this returns the
resulting state; for example, if we apply Left to the start
state in Figure, the resulting state has the 5 and the blank
switched.
• Goal test: This checks whether the state matches the goal
configuration shown in Figure.
• Path cost: Each step costs 1, so the path cost is the number
of steps in the path.
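The actions and transition model above can be sketched as a successor function. The start state below, written row by row, is assumed from the figure usually shown with this example, where the tile to the left of the blank is 5:

```python
def successors(state):
    """Actions move the blank (0): Left, Right, Up, Down, when legal.
    A state is a tuple of 9 tiles, read row by row on the 3x3 board."""
    i = state.index(0)                        # position of the blank
    row, col = divmod(i, 3)
    moves = {'Left': (row, col - 1), 'Right': (row, col + 1),
             'Up': (row - 1, col), 'Down': (row + 1, col)}
    result = {}
    for action, (r, c) in moves.items():
        if 0 <= r < 3 and 0 <= c < 3:         # subset of actions depends on blank
            j = 3 * r + c
            board = list(state)
            board[i], board[j] = board[j], board[i]  # slide tile into the blank
            result[action] = tuple(board)
    return result

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
print(sorted(successors(start)))              # ['Down', 'Left', 'Right', 'Up']
print(successors(start)['Left'])              # (7, 2, 4, 0, 5, 6, 8, 3, 1)
```

Applying Left indeed switches the 5 and the blank, matching the transition model described above; a blank in a corner gets only two legal actions.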
Constraint satisfaction problems (CSPs)
• CSP:
– state is defined by variables Xi with values from domain Di
– goal test is a set of constraints specifying allowable
combinations of values for subsets of variables
• Allows useful general-purpose algorithms with more
power than standard search algorithms
CSP can be viewed as standard search problem
• Initial state: the empty assignment{}, in which
all variables are unassigned.
• Successor function: a value can be assigned to
any unassigned variable provided that it does
not conflict with previously assigned variables
• Goal test: current assignment is complete
• Path cost: a constant cost for every step
Example: Map-Coloring
• Country: Australia
• Variables: WA, NT, Q, NSW, V, SA, T
• Domains: Di = {red, green, blue}
• Constraints: adjacent regions must have different colors
• e.g., WA ≠ NT
Example: Map-Coloring
• Solutions are complete and consistent assignments,
e.g., WA = red, NT = green, Q = red, NSW = green,
V = red, SA = blue, T = green
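Viewing map coloring as the standard search problem above (start from the empty assignment, assign a non-conflicting value to an unassigned variable, goal = complete assignment) gives a simple backtracking solver. This sketch hard-codes the Australia adjacency graph; the naive variable and value ordering is an illustrative choice:

```python
NEIGHBOURS = {
    'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
    'SA': ['WA', 'NT', 'Q', 'NSW', 'V'],
    'Q': ['NT', 'SA', 'NSW'], 'NSW': ['Q', 'SA', 'V'],
    'V': ['SA', 'NSW'], 'T': [],
}
COLOURS = ['red', 'green', 'blue']

def backtrack(assignment):
    """Assign a colour to one unassigned variable at a time; backtrack on conflict."""
    if len(assignment) == len(NEIGHBOURS):
        return assignment                     # complete and consistent
    var = next(v for v in NEIGHBOURS if v not in assignment)
    for colour in COLOURS:
        # successor function: only values consistent with assigned neighbours
        if all(assignment.get(n) != colour for n in NEIGHBOURS[var]):
            solution = backtrack({**assignment, var: colour})
            if solution:
                return solution
    return None                               # no value works: backtrack

solution = backtrack({})
print(solution['SA'])  # blue
```

Tasmania (T) has no constraints, so any colour works for it; every adjacent pair in the returned assignment gets different colours.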
Constraint graph
• Binary CSP: each constraint relates two variables
• Constraint graph: nodes are variables, arcs are constraints
Varieties of constraints
• Unary constraints involve a single variable,
– e.g., SA ≠ green
• Binary constraints involve pairs of variables,
– e.g., SA ≠ WA
• Higher-order constraints involve 3 or more variables,
– e.g., SA ≠ WA ≠ NT
Varieties of CSPs
Discrete variables
1.Finite domains
• The simplest kind of CSP involves variables that are discrete and
have finite domains.
• Map coloring problems are of this kind.
• The 8-queens problem can also be viewed as a finite-domain CSP,
where the variables Q1, Q2, …, Q8 are the positions of the queens in
columns 1, …, 8 and each variable has the domain {1,2,3,4,5,6,7,8}.
• Finite-domain CSPs also include Boolean CSPs, whose variables can
be either true or false.
2. Infinite domains
• Discrete variables can also have infinite domains – for
example, the set of integers or the set of strings.
• With infinite domains, it is no longer possible
to describe constraints by enumerating all
allowed combinations of values.
CSPs with continuous domains
• CSPs with continuous domains are very
common in the real world.
• For example, in the field of operations research, the
scheduling of experiments on the Hubble
Telescope requires very precise timing of
observations;
• the start and finish of each observation are
continuous-valued variables that must obey a
variety of precedence and power constraints.
Adversarial Search
• Examine the problems that arise when we try to plan
ahead in a world where other agents are planning
against us.
• A good example is in board games.
• Adversarial games, while much studied in AI, are a
small part of game theory in economics.
Typical AI assumptions
• Two agents whose actions alternate
• Utility values for each agent are the opposite of the
other
– creates the adversarial situation
• Fully observable environments
• In game theory terms: Zero-sum games of perfect
information.
• We’ll relax these assumptions later.
Search versus Games
• Search – no adversary
– Solution is (heuristic) method for finding goal
– Heuristic techniques can find optimal solution
– Evaluation function: estimate of cost from start to goal
through given node
– Examples: path planning, scheduling activities
• Games – adversary
– Solution is strategy (strategy specifies move for every
possible opponent reply).
– Optimality depends on opponent. Why?
– Time limits force an approximate solution
– Evaluation function: evaluate “goodness” of game position
– Examples: chess, checkers, Othello, backgammon
Game Setup
• Two players: MAX and MIN
• MAX moves first and they take turns until the game is
over
– Winner gets award, loser gets penalty.
• Games as search:
– Initial state: e.g. board configuration of chess
– Successor function: list of (move,state) pairs specifying legal
moves.
– Terminal test: Is the game finished?
– Utility function: Gives numerical value of terminal states. E.g.
win (+1), lose (-1) and draw (0) in tic-tac-toe or chess
Size of search trees
• b = branching factor
• d = number of moves by both players
• Search tree is O(b^d)
• Chess
– b ~ 35
– d ~ 100
– search tree is ~10^154 (!!)
– completely impractical to search this
• Game-playing emphasizes being able to make optimal decisions in a finite amount of time
– Somewhat realistic as a model of a real-world agent
– Even if games themselves are artificial
Partial Game Tree (2-player, deterministic,
turns)
Minimax strategy: Look ahead and reason backwards
• Find the optimal strategy for MAX assuming an
infallible MIN opponent
– Need to compute this all the way down the tree
• Assumption: Both players play optimally!
• Given a game tree, the optimal strategy can be
determined by using the minimax value of each
node.
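The minimax value can be computed by the recursion below: terminal states return their utility, MAX nodes take the maximum over successor values, MIN nodes the minimum. The nested-list game tree is an illustrative stand-in for a real successor function; leaves are utilities for MAX.

```python
def minimax(state, maximizing, successors, utility, is_terminal):
    """Minimax value: MAX picks the highest value, MIN the lowest,
    reasoning backwards from the terminal states."""
    if is_terminal(state):
        return utility(state)
    values = [minimax(s, not maximizing, successors, utility, is_terminal)
              for s in successors(state)]
    return max(values) if maximizing else min(values)

# Tiny two-ply game: MAX moves first, MIN replies; leaves are MAX's utilities.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
value = minimax(
    tree,
    maximizing=True,
    successors=lambda s: s,
    utility=lambda s: s,
    is_terminal=lambda s: not isinstance(s, list),
)
print(value)  # 3
```

MIN drives the three branches down to 3, 2, and 2 respectively, so MAX's optimal first move is the left branch with minimax value 3.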
Alpha Beta Pruning
• Some branches will never be played by
rational players since they include sub-optimal
decisions for either player
• First, we will see the idea of Alpha Beta
Pruning
• Then, we’ll introduce the algorithm for
minimax with alpha beta pruning, and go
through the example again, showing the book-
keeping it does as it goes along
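A sketch of that book-keeping: alpha tracks the best value MAX can guarantee so far, beta the best MIN can guarantee, and a branch is cut off as soon as alpha >= beta. The nested-list game tree is an illustrative three-branch example, not from the slides.

```python
def alphabeta(state, maximizing, successors, utility, is_terminal,
              alpha=float('-inf'), beta=float('inf')):
    """Minimax with alpha-beta pruning: prune when alpha >= beta."""
    if is_terminal(state):
        return utility(state)
    if maximizing:
        value = float('-inf')
        for s in successors(state):
            value = max(value, alphabeta(s, False, successors, utility,
                                         is_terminal, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                        # MIN will never allow this branch
        return value
    value = float('inf')
    for s in successors(state):
        value = min(value, alphabeta(s, True, successors, utility,
                                     is_terminal, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break                            # MAX will never allow this branch
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
value = alphabeta(tree, True,
                  successors=lambda s: s,
                  utility=lambda s: s,
                  is_terminal=lambda s: not isinstance(s, list))
print(value)  # 3
```

The result equals the plain minimax value, but in the middle branch the leaves 4 and 6 are never examined: once MIN can force 2 there, the branch can no longer beat MAX's guaranteed 3.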
Alpha beta pruning. Example