AI ML Unit 1
Before learning about Artificial Intelligence, we should understand why AI is important and why we should learn it. The following are some of the main reasons to learn about AI:
o With the help of AI, you can create software or devices that can solve real-world problems easily and accurately, such as problems in health, marketing, and traffic.
o With the help of AI, you can create a personal virtual assistant, such as Cortana, Google Assistant, or Siri.
o With the help of AI, you can build robots that can work in environments where human survival is at risk.
o AI opens a path to other new technologies, new devices, and new opportunities.
Artificial Intelligence is not just a part of computer science; it is a vast field that draws on many other disciplines. To create AI, we should first understand how intelligence is composed: intelligence is an intangible property of our brain that combines reasoning, learning, problem-solving, perception, language understanding, and more. The disciplines that contribute to AI include:
o Mathematics
o Biology
o Psychology
o Sociology
o Computer Science
o Neuroscience
o Statistics
Advantages of Artificial Intelligence:
o High accuracy with fewer errors: AI machines or systems are less prone to errors and achieve high accuracy because they take decisions based on prior experience or information.
o High speed: AI systems can make decisions at very high speed; this is why an AI system can beat a chess champion at chess.
o High reliability: AI machines are highly reliable and can perform the same
action multiple times with high accuracy.
o Useful for risky areas: AI machines can be helpful in situations such as defusing a bomb or exploring the ocean floor, where employing a human would be risky.
o Digital assistance: AI can provide digital assistance to users; for example, AI is currently used by various e-commerce websites to show products according to customer requirements.
o Useful as a public utility: AI can be very useful for public utilities, such as self-driving cars that make journeys safer and hassle-free, facial recognition for security purposes, and natural language processing for communicating with humans in human language.
Every technology has some disadvantages, and the same goes for Artificial Intelligence. Although it is a highly advantageous technology, it still has some disadvantages that we need to keep in mind while creating an AI system.
Following are some sectors where Artificial Intelligence is applied:
1. AI in Astronomy
o Artificial Intelligence can be very useful for solving complex problems about the universe. AI technology can help us understand the universe, such as how it works and how it originated.
2. AI in Healthcare
o In the last five to ten years, AI has become more advantageous for the healthcare industry and is going to have a significant impact on it.
o Healthcare industries are applying AI to make better and faster diagnoses than humans. AI can help doctors with diagnoses and can warn when a patient's condition is worsening, so that medical help can reach the patient before hospitalization.
3. AI in Gaming
o AI can be used for gaming purposes. AI machines can play strategic games like chess, where the machine needs to think about a large number of possible positions.
4. AI in Finance
o AI and the finance industry are the best match for each other. The finance industry is implementing automation, chatbots, adaptive intelligence, algorithmic trading, and machine learning into financial processes.
5. AI in Data Security
o The security of data is crucial for every company, and cyber-attacks are growing rapidly in the digital world. AI can be used to make your data more safe and secure. Tools such as the AEG bot and the AI2 platform are used to detect software bugs and cyber-attacks more effectively.
6. AI in Social Media
o Social media sites such as Facebook, Twitter, and Snapchat contain billions of user profiles, which need to be stored and managed in a very efficient way. AI can organize and manage massive amounts of data, and it can analyze lots of data to identify the latest trends, hashtags, and requirements of different users.
8. AI in Automotive Industry
o Some automotive companies are using AI to provide virtual assistants to their users for better performance. For example, Tesla has introduced TeslaBot, an intelligent virtual assistant.
o Various companies are currently working on developing self-driving cars that can make your journey safer and more secure.
9. AI in Robotics:
o Artificial Intelligence has a remarkable role in robotics. Usually, general robots are programmed to perform some repetitive task, but with the help of AI we can create intelligent robots that can perform tasks from their own experience without being pre-programmed.
o Humanoid robots are among the best examples of AI in robotics; recently, intelligent humanoid robots named Erica and Sophia have been developed that can talk and behave like humans.
10. AI in Entertainment
o We are currently using some AI based applications in our daily life with some
entertainment services such as Netflix or Amazon. With the help of ML/AI
algorithms, these services show the recommendations for programs or shows.
11. AI in Agriculture
o Agriculture is an area that requires various resources, labor, money, and time for the best result. Nowadays agriculture is becoming digital, and AI is emerging in this field. Agriculture is applying AI through agricultural robotics, soil and crop monitoring, and predictive analysis. AI in agriculture can be very helpful for farmers.
12. AI in E-commerce
o AI is providing a competitive edge to the e-commerce industry, and it is becoming more in demand in e-commerce businesses. AI helps shoppers discover associated products in their recommended size, color, or even brand.
13. AI in education:
o AI can automate grading so that tutors have more time to teach. An AI chatbot can communicate with students as a teaching assistant.
o In the future, AI could work as a personal virtual tutor for students, accessible easily at any time and in any place.
AGENTS IN ARTIFICIAL INTELLIGENCE (OR) INTELLIGENT AGENT
An AI system can be defined as the study of a rational agent and its environment. Agents sense the environment through sensors and act on the environment through actuators. An AI agent can have mental properties such as knowledge, belief, and intention.
What is an Agent?
An agent can be anything that perceives its environment through sensors and acts upon that environment through actuators. An agent runs in a cycle of perceiving, thinking, and acting. An agent can be:
o Human agent: A human agent has eyes, ears, and other organs that work as sensors, and hands, legs, and the vocal tract that work as actuators.
o Robotic agent: A robotic agent can have cameras, infrared range finders, and NLP as sensors and various motors as actuators.
o Software agent: A software agent can take keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.
Hence the world around us is full of agents, such as thermostats, cellphones, and cameras; even we ourselves are agents.
Before moving forward, we should first know about sensors, effectors, and actuators.
Sensor: Sensor is a device which detects the change in the environment and sends the
information to other electronic devices. An agent observes its environment through
sensors.
Actuators: Actuators are the component of machines that converts energy into
motion. The actuators are only responsible for moving and controlling a system. An
actuator can be an electric motor, gears, rails, etc.
Effectors: Effectors are the devices which affect the environment. Effectors can be
legs, wheels, arms, fingers, wings, fins, and display screen.
Intelligent Agents:
PEAS Representation
o P: Performance measure
o E: Environment
o A: Actuators
o S: Sensors
Here performance measure is the objective for the success of an agent's behavior.
Goal formulation
Goal formulation, based on the current situation and the agent’s performance measure,
is the first step in problem solving.
Search
The process of looking for a sequence of actions that reaches the goal is called search.
A search algorithm takes a problem as input and returns a solution in the form of an
action sequence. Once a solution is found, the actions it recommends can be carried
out. This is called the execution phase.
Example Problems
The problem-solving approach has been applied to a vast array of task environments.
We list some of the best known here, distinguishing between toy and real-world
problems.
The first toy example is the vacuum world; a second standard toy problem is the 8-puzzle, which can be formulated as follows:
• States: A state description specifies the location of each of the eight tiles and the blank in one of the nine squares.
• Initial state: Any state can be designated as the initial state. Note that any given goal can be reached from exactly half of the possible initial states.
• Actions: The simplest formulation defines the actions as movements of the blank space: Left, Right, Up, or Down. Different subsets of these are possible depending on where the blank is.
• Transition model: Given a state and an action, this returns the resulting state; for example, if we apply Left to the start state, the resulting state has the 5 and the blank switched.
• Goal test: This checks whether the state matches the goal configuration.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
The 8-puzzle belongs to the family of sliding-block puzzles, which are often used as test problems for new search algorithms in AI. This family is known to be NP-complete.
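To make the above formulation concrete, the following minimal Python sketch encodes the 8-puzzle states, actions, transition model, and goal test. The goal configuration and the function names are illustrative assumptions, not part of the original notes.

# A minimal sketch of the 8-puzzle formulation described above.
# A state is a tuple of 9 entries read row by row; 0 denotes the blank.

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # example goal configuration (an assumption)
MOVES = {'Up': -3, 'Down': 3, 'Left': -1, 'Right': 1}

def actions(state):
    """Return the legal movements of the blank in this state."""
    i = state.index(0)
    row, col = divmod(i, 3)
    legal = []
    if row > 0: legal.append('Up')
    if row < 2: legal.append('Down')
    if col > 0: legal.append('Left')
    if col < 2: legal.append('Right')
    return legal

def result(state, action):
    """Transition model: return the state reached by sliding the blank."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]          # swap the blank with the neighbouring tile
    return tuple(s)

def goal_test(state):
    return state == GOAL             # path cost is simply the number of moves taken

print(actions((1, 2, 3, 4, 0, 5, 6, 7, 8)))            # ['Up', 'Down', 'Left', 'Right']
print(result((1, 2, 3, 4, 0, 5, 6, 7, 8), 'Up'))       # (1, 0, 3, 4, 2, 5, 6, 7, 8)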
8-queens problem
The goal of the 8-queens problem is to place eight queens on a chessboard such that
no queen attacks any other. (A queen attacks any piece in the same row, column or
diagonal.)
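As a small illustration, the "no two queens attack each other" condition can be checked in code by representing a candidate placement as a list in which the index is the column and the value is the row. The function name conflicts and the sample placements below are our own assumptions.

from itertools import combinations

def conflicts(board):
    """Count attacking pairs; board[c] gives the row of the queen in column c."""
    pairs = 0
    for (c1, r1), (c2, r2) in combinations(enumerate(board), 2):
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2):   # same row or same diagonal
            pairs += 1
    return pairs

# A board is a solution to the 8-queens problem when conflicts(board) == 0.
print(conflicts([0, 4, 7, 5, 2, 6, 1, 3]))             # 0 -> a valid placement
print(conflicts([0, 1, 2, 3, 4, 5, 6, 7]))             # 28 -> every pair attacks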
Real-world problems
Route-finding algorithms are used in a variety of applications.
Consider the airline travel problems that must be solved by a travel-planning Web site:
• States: Each state obviously includes a location (e.g., an airport) and the current
time. Furthermore, because the cost of an action (a flight segment) may depend on
previous segments, their fare bases, and their status as domestic or international, the
state must record extra information about these “historical” aspects.
• Initial state: This is specified by the user’s query.
• Actions: Take any flight from the current location, in any seat class, leaving after the
current time, leaving enough time for within-airport transfer if needed.
• Transition model: The state resulting from taking a flight will have the flight’s
destination as the current location and the flight’s arrival time as the current time.
• Goal test: Are we at the final destination specified by the user?
• Path cost: This depends on monetary cost, waiting time, flight time, customs and
immigration
procedures, seat quality, time of day, type of airplane, frequent-flyer mileage awards,
and so on.
The traveling salesperson problem (TSP) is a touring problem in which each city
must be visited exactly once. The aim is to find the shortest tour. The problem is
known to be NP-hard, but an enormous amount of effort has been expended to
improve the capabilities of TSP algorithms.
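The factorial growth that makes the TSP hard is easy to see in a brute-force sketch that simply tries every possible tour. The 4-city distance matrix below is made up purely for illustration.

from itertools import permutations

# Illustrative symmetric distance matrix for 4 cities (made-up numbers).
DIST = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]

def tour_length(tour):
    """Total length of a closed tour that returns to its starting city."""
    return sum(DIST[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def brute_force_tsp(n):
    """Try every ordering of cities 1..n-1, keeping city 0 as the fixed start."""
    best = min(((0,) + p for p in permutations(range(1, n))), key=tour_length)
    return best, tour_length(best)

print(brute_force_tsp(4))    # ((0, 1, 3, 2), 80) for the matrix above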
SEARCH ALGORITHMS
Search algorithms are one of the most important areas of Artificial Intelligence.
This topic explains the search algorithms used in AI.
Following are the four essential properties of search algorithms, used to compare their efficiency:
Completeness: A search algorithm is complete if it guarantees to return a solution whenever at least one solution exists.
Optimality: A solution is optimal if it is guaranteed to be the best (lowest path cost) among all solutions.
Time Complexity: The time an algorithm takes to complete its task, usually measured by the number of nodes generated or expanded.
Space Complexity: The maximum storage space required at any point during the search, expressed in terms of the complexity of the problem.
Types of search algorithms
Based on the search problems we can classify the search algorithms into
uninformed search strategies (Blind search) and Heuristic search strategies
(Informed search) algorithms.
The uninformed search does not use any domain knowledge, such as closeness to or the location of the goal. It operates in a brute-force way, as it only includes information about how to traverse the tree and how to identify leaf and goal nodes. Uninformed search explores the search tree without any information beyond the problem definition (initial state, operators, and goal test), so it is also called blind search. It examines each node of the tree until it reaches the goal node.
o Breadth-first search
o Dijkstra’s algorithm or Uniform cost search
o Depth-first search and problem of memory
o Depth- limited and Iterative deepening search
o Bidirectional Search
A heuristic is a technique that is not always guaranteed to find the best solution but is designed to find a good solution in a reasonable time.
Informed search can solve much more complex problems than could be solved otherwise.
1. Breadth-first Search Algorithm:
Example:
In the below tree structure, we have shown the traversal of a tree using the BFS algorithm from the root node S to the goal node K. The BFS algorithm traverses in layers, so it will follow the path shown by the dotted arrow, and the traversed path will be:
S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
Time Complexity: The time complexity of BFS is O(b^d), where b is the branching factor and d is the depth of the shallowest solution.
Space Complexity: The space complexity of BFS is given by the memory size of the frontier, which is O(b^d).
Completeness: BFS is complete, which means if the shallowest goal node is at some
finite depth, then BFS will find a solution.
Optimality: BFS is optimal if path cost is a non-decreasing function of the depth of
the node.
Advantages:
o BFS will provide a solution if any solution exists.
o If there is more than one solution for a given problem, BFS will provide the solution that requires the fewest steps.
Disadvantages:
o It requires lots of memory since each level of the tree must be saved into
memory to expand the next level.
o BFS needs lots of time if the solution is far away from the root node.
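A minimal Python sketch of the layer-by-layer behaviour described in the example above is given below. The adjacency dictionary is an illustrative graph, not the tree from the figure referenced by the example.

from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: always expand the shallowest unexpanded node."""
    frontier = deque([[start]])            # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in explored:
                explored.add(child)
                frontier.append(path + [child])
    return None

# Illustrative graph (not the tree from the figure above).
graph = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['E'], 'D': ['K'], 'E': ['K']}
print(bfs(graph, 'S', 'K'))                # ['S', 'A', 'D', 'K']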
2. Uniform-cost Search Algorithm:
Uniform-cost search expands the node with the lowest cumulative path cost g(n); it is equivalent to Dijkstra's algorithm.
Example:
Completeness:
Uniform-cost search is complete: if there is a solution, UCS will find it.
Time Complexity:
Let C* be the cost of the optimal solution and ε the minimum cost of a single step toward the goal. Then the number of steps is at most 1 + ⌊C*/ε⌋ (we add 1 because we start from state 0 and end at C*/ε), so the worst-case time complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).
By the same logic, the worst-case space complexity of uniform-cost search is also O(b^(1 + ⌊C*/ε⌋)).
Optimal:
Uniform-cost search is always optimal as it only selects a path with the lowest
path cost.
Advantages:
o Uniform cost search is optimal because at every state the path with the least
cost is chosen.
Disadvantages:
o It does not care about the number of steps involved in the search; it is only concerned with path cost. Because of this, the algorithm may get stuck in an infinite loop.
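The behaviour described above, always expanding the path with the lowest cumulative cost g(n), can be sketched with a priority queue. The weighted graph below is an assumed example.

import heapq

def uniform_cost_search(graph, start, goal):
    """Expand the node with the lowest path cost g(n) first (Dijkstra-style)."""
    frontier = [(0, start, [start])]             # priority queue ordered by path cost
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        for child, step in graph.get(node, []):
            new_cost = cost + step
            if new_cost < best_cost.get(child, float('inf')):
                best_cost[child] = new_cost
                heapq.heappush(frontier, (new_cost, child, path + [child]))
    return None, float('inf')

# Illustrative weighted graph: each entry is (neighbour, step cost).
graph = {'S': [('A', 1), ('B', 5)], 'A': [('G', 9)], 'B': [('G', 2)]}
print(uniform_cost_search(graph, 'S', 'G'))      # (['S', 'B', 'G'], 7)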
3. Depth-first Search Algorithm:
o Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
o It is called depth-first search because it starts from the root node and follows each path to its greatest depth node before moving to the next path.
o DFS uses a stack data structure for its implementation.
o The process of the DFS algorithm is similar to the BFS algorithm.
Note: Backtracking is an algorithm technique for finding all possible solutions using
recursion.
Example:
It will start searching from root node S and traverse A, then B, then D and E. After traversing E, it will backtrack the tree, as E has no other successor and the goal node has not yet been found. After backtracking, it will traverse node C and then G, where it will terminate because it has found the goal node.
Completeness: DFS search algorithm is complete within finite state space as it will
expand every node within a limited search tree.
Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm. It is given by:
T(n) = 1 + b + b^2 + b^3 + ... + b^m = O(b^m)
where m = the maximum depth of any node, which can be much larger than d (the depth of the shallowest solution).
Space Complexity: The DFS algorithm needs to store only a single path from the root node, so the space complexity of DFS is equivalent to the size of the fringe set, which is O(bm).
Advantage:
o DFS requires very little memory, as it only needs to store the stack of nodes on the path from the root node to the current node.
o It takes less time to reach the goal node than the BFS algorithm (if it traverses along the right path).
Disadvantage:
o There is a possibility that many states keep reoccurring, and there is no guarantee of finding a solution.
o The DFS algorithm searches deep down and may sometimes go into an infinite loop.
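A minimal stack-based sketch of DFS is shown below. The graph is chosen to roughly reproduce the traversal order S, A, B, D, E, C, G described in the example above, so treat it as an assumption.

def dfs(graph, start, goal):
    """Depth-first search using an explicit stack (LIFO frontier)."""
    stack = [[start]]
    visited = set()
    while stack:
        path = stack.pop()                       # take the most recently added path
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for child in reversed(graph.get(node, [])):   # left-most child explored first
            if child not in visited:
                stack.append(path + [child])
    return None

# Illustrative graph matching the visiting order S, A, B, D, E, C, G.
graph = {'S': ['A', 'C'], 'A': ['B'], 'B': ['D', 'E'], 'C': ['G']}
print(dfs(graph, 'S', 'G'))                      # ['S', 'C', 'G']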
4. Depth-Limited Search Algorithm:
A depth-limited search algorithm is similar to depth-first search with a predetermined limit ℓ: nodes at the depth limit are treated as if they have no successors. Depth-limited search can terminate with two conditions of failure:
o Standard failure value: It indicates that the problem does not have any solution.
o Cutoff failure value: It indicates that there is no solution for the problem within the given depth limit.
Advantages:
o Depth-limited search is memory efficient.
Disadvantages:
o Depth-limited search also has the disadvantage of incompleteness.
o It may not be optimal if the problem has more than one solution.
Example:
Completeness: The DLS algorithm is complete if the solution is within the depth limit.
Time Complexity: The time complexity of the DLS algorithm is O(b^ℓ).
Space Complexity: The space complexity of the DLS algorithm is O(b×ℓ).
Optimal: Depth-limited search can be viewed as a special case of DFS, and it is also
not optimal even if ℓ>d.
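A recursive sketch of depth-limited search that returns either a path, the cutoff failure value, or the standard failure value (None) is given below; the small graph is an assumption for illustration.

def depth_limited_search(graph, node, goal, limit):
    """Recursive DFS that stops expanding below the given depth limit.

    Returns a path to the goal, the string 'cutoff' if the limit was reached,
    or None (standard failure) if no solution exists at all within the limit.
    """
    if node == goal:
        return [node]
    if limit == 0:
        return 'cutoff'
    cutoff_occurred = False
    for child in graph.get(node, []):
        result = depth_limited_search(graph, child, goal, limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    return 'cutoff' if cutoff_occurred else None

# Illustrative graph: the goal 'G' sits at depth 2, so a limit of 1 is a cutoff.
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': []}
print(depth_limited_search(graph, 'S', 'G', 1))   # 'cutoff'
print(depth_limited_search(graph, 'S', 'G', 2))   # ['S', 'A', 'G']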
5. Iterative Deepening Depth-first Search:
The iterative deepening algorithm is a combination of the DFS and BFS algorithms. This search algorithm finds the best depth limit by gradually increasing the limit until a goal is found.
The algorithm performs depth-first search up to a certain "depth limit" and keeps increasing the depth limit after each iteration until the goal node is found.
This search algorithm combines the benefits of breadth-first search's fast search and depth-first search's memory efficiency.
Iterative deepening is a useful uninformed search when the search space is large and the depth of the goal node is unknown.
Advantages:
o It combines the benefits of BFS and DFS search algorithm in terms of fast
search and memory efficiency.
Disadvantages:
o The main drawback of IDDFS is that it repeats all the work of the previous
phase.
Example:
The following tree structure shows the iterative deepening depth-first search. The IDDFS algorithm performs several iterations until it finds the goal node. The iterations performed by the algorithm are given as:
1'st Iteration-----> A
2'nd Iteration----> A, B, C
3'rd Iteration------>A, B, D, E, C, F, G
4'th Iteration------>A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.
Completeness:
This algorithm is complete if the branching factor is finite.
Time Complexity:
Let us suppose b is the branching factor and d is the depth of the shallowest goal; then the worst-case time complexity is O(b^d).
Space Complexity:
The space complexity of IDDFS is O(bd).
Optimal:
IDDFS algorithm is optimal if path cost is a non- decreasing function of the depth of
the node.
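The iteration trace above can be reproduced with the following sketch, which repeats a depth-limited DFS with limits 0, 1, 2, ... until the goal appears. The tree is reconstructed from the listed iterations, so treat its exact shape as an assumption.

def iterative_deepening_search(graph, start, goal, max_depth=20):
    """Repeat a depth-limited DFS with limits 0, 1, 2, ... until the goal is found."""

    def dls(node, limit):
        if node == goal:
            return [node]
        if limit == 0:
            return None
        for child in graph.get(node, []):
            found = dls(child, limit - 1)
            if found is not None:
                return [node] + found
        return None

    for limit in range(max_depth + 1):           # gradually increase the depth limit
        path = dls(start, limit)
        if path is not None:
            return path
    return None

# Tree reconstructed from the iteration trace above (goal K found in iteration 4).
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'], 'D': ['H', 'I'], 'F': ['K']}
print(iterative_deepening_search(graph, 'A', 'K'))   # ['A', 'C', 'F', 'K']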
6. Bidirectional Search Algorithm:
The bidirectional search algorithm runs two simultaneous searches, one from the initial state, called the forward search, and the other from the goal node, called the backward search, to find the goal node. Bidirectional search replaces one single search graph with two small subgraphs, in which one starts the search from the initial vertex and the other starts from the goal vertex. The search stops when these two graphs intersect each other.
Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
Advantages:
o Bidirectional search is fast.
o Bidirectional search requires less memory.
Disadvantages:
o Implementation of the bidirectional search tree is difficult.
o In bidirectional search, one should know the goal state in advance.
Example:
In the below search tree, the bidirectional search algorithm is applied. This algorithm divides one graph/tree into two sub-graphs. It starts traversing from node 1 in the forward direction and from goal node 16 in the backward direction.
So far we have talked about the uninformed search algorithms, which looked through the search space for all possible solutions of the problem without having any additional knowledge about the search space. In contrast, an informed search algorithm uses knowledge such as how far we are from the goal, the path cost, how to reach the goal node, etc. This knowledge helps agents explore less of the search space and find the goal node more efficiently.
Informed search algorithms are more useful for large search spaces. Because an informed search algorithm uses the idea of a heuristic, it is also called heuristic search.
Here h(n) is the heuristic estimate of the cost and h*(n) is the actual (optimal) cost from n to the goal. A heuristic is admissible when the heuristic cost is less than or equal to the actual cost, i.e. h(n) ≤ h*(n).
On each iteration, the node n with the lowest heuristic value is expanded, all of its successors are generated, and n is placed on the closed list. The algorithm continues until a goal state is found.
1. Greedy Best First Search
The greedy best-first search algorithm always selects the path which appears best at that moment. It is a combination of depth-first search and breadth-first search: it uses the heuristic function to guide the search, which allows us to take advantage of both algorithms. At each step, best-first search chooses the most promising node. In the greedy best-first search algorithm, we expand the node which appears closest to the goal node, where the closeness is estimated by the heuristic function, i.e.
f(n) = h(n).
Example:
Consider the below search problem, which we will traverse using greedy best-first search. At each iteration, nodes are expanded using the evaluation function f(n) = h(n), which is given in the below table.
In this search example, we use two lists, the OPEN and CLOSED lists. Following are the iterations for traversing the above example.
Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).
Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where m is the maximum depth of the search space.
Complete: Greedy best-first search is also incomplete, even if the given state space is
finite.
Advantages:
o Best first search can switch between BFS and DFS by gaining the advantages
of both the algorithms.
o This algorithm is more efficient than BFS and DFS algorithms.
Disadvantages:
o It can behave as an unguided depth-first search in the worst case scenario.
o It can get stuck in a loop as DFS.
o This algorithm is not optimal.
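A minimal sketch of greedy best-first search, with an OPEN priority queue ordered by h(n) and a CLOSED set, is shown below. The graph and heuristic table are illustrative assumptions, not the figure from the notes.

import heapq

def greedy_best_first_search(graph, h, start, goal):
    """Always expand the node with the smallest heuristic value, i.e. f(n) = h(n)."""
    frontier = [(h[start], start, [start])]      # OPEN list ordered by h(n)
    closed = set()                               # CLOSED list
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for child in graph.get(node, []):
            if child not in closed:
                heapq.heappush(frontier, (h[child], child, path + [child]))
    return None

# Illustrative graph and heuristic values (made up for this sketch).
graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D', 'E'], 'E': ['G']}
h = {'S': 7, 'A': 6, 'B': 2, 'C': 5, 'D': 3, 'E': 1, 'G': 0}
print(greedy_best_first_search(graph, h, 'S', 'G'))   # ['S', 'B', 'E', 'G']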
2. A* SEARCH ALGORITHM:
In the A* search algorithm, we use the search heuristic as well as the cost to reach the node. Hence we can combine both costs as follows, and this sum is called the fitness number:
f(n) = g(n) + h(n)
where g(n) is the cost to reach node n from the start state and h(n) is the estimated cost from n to the goal.
Algorithm of A* search:
Step 1: Place the starting node in the OPEN list.
Step 2: Check whether the OPEN list is empty or not; if the list is empty, return failure and stop.
Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function (g + h); if node n is the goal node, return success and stop. Otherwise:
Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, compute the evaluation function for n' and place it into the OPEN list.
Step 5: Else, if node n' is already in OPEN or CLOSED, attach it to the back pointer which reflects the lowest g(n') value.
Step 6: Return to Step 2.
Example:
In this example, we will traverse the given graph using the A* algorithm. The heuristic value of all states is given in the table below, so we will calculate f(n) of each state using the formula f(n) = g(n) + h(n), where g(n) is the cost to reach the node from the start state.
Here we will use the OPEN and CLOSED lists.
Solution:
Iteration3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7), (S-->G,
10)}
o Admissible: the first condition required for optimality is that h(n) should be an admissible heuristic for A* tree search. An admissible heuristic is optimistic in nature.
o Consistency: the second required condition is consistency, which is needed only for A* graph search.
If the heuristic function is admissible, then A* tree search will always find the least
cost path.
Advantages:
o The A* search algorithm performs better than other search algorithms.
o A* search algorithm is optimal and complete.
o This algorithm can solve very complex problems.
Disadvantages:
o It does not always produce the shortest path, as it is mostly based on heuristics and approximation.
o A* search algorithm has some complexity issues.
o The main drawback of A* is its memory requirement: it keeps all generated nodes in memory, so it is not practical for various large-scale problems.
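A compact A* sketch follows. The weighted graph and heuristic values are reconstructed so that the f(n) values match the Iteration 3 entries listed above (S-->A-->C-->G = 6, S-->A-->C-->D = 11, S-->A-->B = 7, S-->G = 10); the exact edge costs are therefore an assumption.

import heapq

def a_star_search(graph, h, start, goal):
    """Expand the node with the smallest f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]   # OPEN list of (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for child, step in graph.get(node, []):
            new_g = g + step
            if new_g < best_g.get(child, float('inf')):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g + h[child], new_g, child, path + [child]))
    return None, float('inf')

# Edge costs and heuristic reconstructed to match the iteration values above.
graph = {'S': [('A', 1), ('G', 10)], 'A': [('B', 2), ('C', 1)], 'C': [('D', 3), ('G', 4)]}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
print(a_star_search(graph, h, 'S', 'G'))         # (['S', 'A', 'C', 'G'], 6)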
AO* ALGORITHM (AND-OR GRAPH SEARCH)
Example
The path from A through B, E-F is better, with a total cost of (17 + 1 = 18). Thus we can see that to search an AND-OR graph, the following three things must be done:
1. Traverse the graph starting at the initial node, following the current best path, and accumulate the set of nodes that are on the path and have not yet been expanded.
2. Pick one of these best unexpanded nodes and expand it. Add its successors to the graph and compute f' (the cost of the remaining distance) for each of them.
3. Change the f' estimate of the newly expanded node to reflect the new information produced by its successors. Propagate this change backward through the graph and decide which path is now the current best path.
Advantages of AO*:
o It is complete.
o It will not go into an infinite loop.
o It requires less memory.
Disadvantages of AO*:
o It is not optimal, as it does not explore all paths once it finds a solution.
LOCAL SEARCH AND OPTIMIZATION PROBLEMS
Local search
Local search algorithms operate by searching from a start state to neighboring states, without keeping track of the paths or the set of states that have been reached. That means they are not systematic: they might never explore a portion of the search space where a solution actually resides.
However, they have two key advantages:
(1) they use very little memory; and
(2) they can often find reasonable solutions in large or infinite state spaces for which
systematic algorithms are unsuitable.
Optimization problem
Local search algorithms can also solve optimization problems, in which the
aim is to find the best state according to an objective function.
State-space landscape
The state-space landscape is a graphical representation used by the hill-climbing algorithm, showing the various states of the algorithm plotted against the objective function or cost.
To understand local search, consider the states of a problem laid out in a state-
space Landscape. Each point (state) in the landscape has an “elevation,” defined by
the value of the objective function. If elevation corresponds to an objective function,
then the aim is to find the highest peak—a global maximum—and we call the
process hill climbing. If elevation corresponds to cost, then the aim is to find the
lowest valley—a global minimum—and we call it gradient descent.
1. Hill-climbing search
It keeps track of one current state and on each iteration moves to the
neighboring state with highest value—that is, it heads in the direction that provides
the steepest ascent. It terminates when it reaches a “peak” where no neighbor has a
higher value.
Hill climbing is sometimes called greedy local search because it grabs a good
neighbor state without thinking ahead about where to go next.
LOCAL MAXIMA: A local maximum is a peak that is higher than each of its
neighboring states but lower than the global maximum. Hill-climbing algorithms that
reach the vicinity of a local maximum will be drawn upward toward the peak but will
then be stuck with nowhere else to go.
PLATEAUS: A plateau is a flat area of the state-space landscape. It can be a flat local maximum, from which no uphill exit exists, or a shoulder, from which progress is possible.
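A minimal steepest-ascent hill-climbing sketch on a toy one-dimensional objective is shown below; the objective and neighbour functions are assumptions chosen only to illustrate climbing until no neighbour is better.

def hill_climbing(objective, neighbours, state):
    """Steepest ascent: move to the best neighbour until no neighbour improves."""
    while True:
        best = max(neighbours(state), key=objective, default=state)
        if objective(best) <= objective(state):
            return state                         # a peak (possibly only a local maximum)
        state = best

# Toy example: maximise f(x) = -(x - 7)^2 over the integers, stepping by +/- 1.
objective = lambda x: -(x - 7) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(hill_climbing(objective, neighbours, 0))   # 7 (the global maximum here)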
2. Simulated annealing
A hill-climbing algorithm that never makes “downhill” moves toward states with
lower value (or higher cost) is always vulnerable to getting stuck in a local maximum.
In contrast, a purely random walk that moves to a successor state without concern for
the value will eventually stumble upon the global maximum, but will be extremely
inefficient. Therefore, it seems reasonable to try to combine hill climbing with a
random walk in a way that yields both efficiency and completeness.
The overall structure of the simulated-annealing algorithm is similar to hill climbing.
Instead of picking the best move, however, it picks a random move. If the move
improves the situation, it is always accepted. Otherwise, the algorithm accepts the
move with some probability less than 1.
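The idea of accepting a "downhill" move with a probability that shrinks as the temperature falls can be sketched as follows. The toy objective, cooling schedule, and parameter values are all assumptions.

import math, random

def simulated_annealing(objective, neighbour, state, t0=10.0, cooling=0.95, steps=1000):
    """Pick a random move; accept it if it improves the objective, otherwise
    accept it with probability exp(delta / T), where T decreases over time."""
    t = t0
    for _ in range(steps):
        nxt = neighbour(state)
        delta = objective(nxt) - objective(state)
        if delta > 0 or random.random() < math.exp(delta / t):
            state = nxt
        t = max(t * cooling, 1e-6)               # cooling schedule
    return state

# Toy example: maximise f(x) = -(x - 7)^2 starting far from the maximum.
objective = lambda x: -(x - 7) ** 2
neighbour = lambda x: x + random.choice([-1, 1])
print(simulated_annealing(objective, neighbour, 50))   # usually ends close to 7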
3. Local beam search
The local beam search algorithm keeps track of k states rather than just one; at each step it generates all successors of the k states and selects the k best among them.
Example (fragment):
Start State: C
Goal State: Z and L
n = 2 (beam count)
Step 5: OPEN = { }
The OPEN set becomes empty on finding the goal node, giving the path C -> O -> I -> Z.
4. Evolutionary algorithm or Genetic algorithm
Genetic algorithms (GAs) are a class of search algorithms modeled on the process of natural evolution. Genetic algorithms are based on the principle of survival of the fittest: the algorithm repeatedly manipulates the most promising chromosomes in search of improved solutions.
Fitness Function
In every iteration, the individuals are evaluated based on their fitness scores, which are computed by the fitness function. Individuals with a better fitness score represent better solutions and are more likely to be chosen for crossover and passed on to the next generation. For example, if a genetic algorithm is used for feature selection in a classification problem, then the accuracy of the model trained with the selected features would serve as the fitness function.
Selection
After calculating the fitness of every individual in the population, a selection
process is used to determine which of the individuals in the population will get to
reproduce and create the offspring that will form the next generation.
Several types of selection methods are available.
Crossover
Generally, two individuals are chosen from the current generation and their
genes are interchanged between two individuals to create a new individual
representing the offspring. This process is also called mating or crossover.
Mutation
Mutation randomly alters one or more genes of an offspring (for example, flipping a bit) with a small probability; this maintains diversity in the population and helps the search escape local optima. In summary, a genetic algorithm proceeds through five phases (a minimal end-to-end sketch follows the list below):
1. Initial population
2. Fitness function
3. Selection
4. Crossover
5. Mutation
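Below is a minimal end-to-end genetic-algorithm sketch covering the five phases listed above. The illustrative problem (maximise the number of 1s in a bit string, often called OneMax), the tournament size, and the mutation rate are assumptions.

import random

def fitness(individual):                 # fitness function
    return sum(individual)

def select(population):                  # tournament selection of one parent
    return max(random.sample(population, 3), key=fitness)

def crossover(p1, p2):                   # single-point crossover
    point = random.randint(1, len(p1) - 1)
    return p1[:point] + p2[point:]

def mutate(individual, rate=0.05):       # flip each gene with a small probability
    return [1 - g if random.random() < rate else g for g in individual]

def genetic_algorithm(length=20, pop_size=30, generations=50):
    # 1. Initial population of random bit strings.
    population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # 2-5. Evaluate fitness, select parents, cross over, and mutate the offspring.
        population = [mutate(crossover(select(population), select(population)))
                      for _ in range(pop_size)]
    return max(population, key=fitness)

best = genetic_algorithm()
print(fitness(best), best)               # fitness is usually at or near 20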
Advantages of Genetic Algorithms
Parallelism
Global optimization
Probabilistic in nature
Hyper-parameter tuning
Computational complexity
Feature Selection
Adversarial search is basically a kind of search in which one can trace the
movement of an enemy or opponent.
GAME PLAYING
1. Optimal Strategies
Games require rules, legal moves and the conditions of winning or losing the
game.
Given a game tree, the optimal strategy can be determined from the minimax
value of each node, which we write as MINIMAX(n).
Apply these definitions to the game tree in Figure 5.2. The terminal nodes on
the bottom level get their utility values from the game’s UTILITY function.
2. Mini-Max Algorithm in Artificial Intelligence
Step 1: The algorithm generates the entire game tree and applies the utility function to obtain the utility values for the terminal states.
Step 2: Now we find the utility values for the Maximizer. Its initial value is -∞, so we compare each value in the terminal states with the initial value of the Maximizer and determine the higher node values; it finds the maximum among them all.
o For node D: max(-1, -∞) => max(-1, 4) = 4
o For node E: max(2, -∞) => max(2, 6) = 6
o For node F: max(-3, -∞) => max(-3, -5) = -3
o For node G: max(0, -∞) => max(0, 7) = 7
Step 3: In the next step, it is the Minimizer's turn, so it compares all node values with +∞ and finds the third-layer node values:
o For node B: min(4, 6) = 4
o For node C: min(-3, 7) = -3
Step 4: Finally, it is the Maximizer's turn again, and it chooses the maximum of all node values to find the value of the root node: for node A, max(4, -3) = 4.
That was the complete workflow of the minimax two-player game.
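The step-by-step computation above can be reproduced by the recursive sketch below. The game tree and terminal utilities are reconstructed from the values in the walkthrough, so treat the exact structure as an assumption.

def minimax(node, maximizing, game_tree, utility):
    """Return the minimax value of a node in an explicit game tree."""
    if node not in game_tree:                    # terminal node
        return utility[node]
    values = [minimax(child, not maximizing, game_tree, utility)
              for child in game_tree[node]]
    return max(values) if maximizing else min(values)

# Tree reconstructed from the walkthrough: A is MAX, B and C are MIN,
# D, E, F, G are MAX, and the leaves carry the terminal utilities.
game_tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
             'D': ['d1', 'd2'], 'E': ['e1', 'e2'], 'F': ['f1', 'f2'], 'G': ['g1', 'g2']}
utility = {'d1': -1, 'd2': 4, 'e1': 2, 'e2': 6, 'f1': -3, 'f2': -5, 'g1': 0, 'g2': 7}
print(minimax('A', True, game_tree, utility))    # 4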
Player C has two choices that lead to terminal states with utility vectors ⟨vA = 1, vB = 2, vC = 6⟩ and ⟨vA = 4, vB = 2, vC = 3⟩. Since 6 is bigger than 3, C should choose the first move.
Multiplayer games usually involve alliances among the players. Alliances are made and broken as the game proceeds.
ALPHA-BETA PRUNING
The main condition required for alpha-beta pruning is:
α >= β
Key points about alpha-beta pruning:
o The Max player will only update the value of alpha.
o The Min player will only update the value of beta.
o While backtracking the tree, the node values will be passed to upper nodes
instead of values of alpha and beta.
o We will only pass the alpha, beta values to the child nodes.
Let's take an example of a two-player search tree to understand the working of alpha-beta pruning.
Step 1: At the first step, the Max player will make the first move from node A, where α = -∞ and β = +∞. These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B passes the same values to its child D.
Step 2: At node D, the value of α will be calculated, as it is Max's turn. The value of α is compared first with 2 and then with 3, and max(2, 3) = 3 will be the value of α at node D; the node value will also be 3.
Step 3: Now the algorithm backtracks to node B, where the value of β will change, as this is Min's turn. Now β = +∞ is compared with the available successor node value, i.e. min(∞, 3) = 3, hence at node B now α = -∞ and β = 3.
In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞ and β = 3 are passed down as well.
Step 4: At node E, Max will take its turn, and the value of alpha will change. The
current value of alpha will be compared with 5, so max (-∞, 5) = 5, hence at node E
α= 5 and β= 3, where α>=β, so the right successor of E will be pruned, and algorithm
will not traverse it, and the value at node E will be 5.
Step 5: In the next step, the algorithm again backtracks the tree, from node B to node A. At node A, the value of alpha is changed; the maximum available value is 3, as max(-∞, 3) = 3, and β = +∞. These two values are now passed on to the right successor of A, which is node C.
At node C, α=3 and β= +∞, and the same values will be passed on to node F.
Step 6: At node F, the value of α is again compared with the left child, which is 0, giving max(3, 0) = 3, and then with the right child, which is 1, giving max(3, 1) = 3; α remains 3, but the node value of F becomes 1.
Step 7: Node F returns the node value 1 to node C, at C α= 3 and β= +∞, here the
value of beta will be changed, it will compare with 1 so min (∞, 1) = 1. Now at C,
α=3 and β= 1, and again it satisfies the condition α>=β, so the next child of C which is
G will be pruned, and the algorithm will not compute the entire sub-tree G.
Step 8: C now returns the value 1 to A; here the best value for A is max(3, 1) = 3. Following is the final game tree, showing the nodes that were computed and the nodes that were never computed. Hence the optimal value for the maximizer is 3 for this example.
o Worst ordering: In some cases, the alpha-beta pruning algorithm does not prune any of the leaves of the tree and works exactly like the minimax algorithm. In this case it also consumes more time because of the alpha-beta bookkeeping; such an ordering is called worst ordering. Here the best move occurs on the right side of the tree. The time complexity for such an ordering is O(b^m).
o Ideal ordering: The ideal ordering for alpha-beta pruning occurs when a lot of pruning happens in the tree and the best moves occur on the left side of the tree. Since we apply DFS, it searches the left of the tree first and can go twice as deep as the minimax algorithm in the same amount of time. The complexity for ideal ordering is O(b^(m/2)).
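The walkthrough above corresponds to the recursive sketch below. The leaf values are reconstructed from the steps; the values of the pruned leaves do not affect the result and are pure assumptions.

def alphabeta(node, maximizing, alpha, beta, game_tree, utility):
    """Minimax with alpha-beta pruning; a branch is cut off once alpha >= beta."""
    if node not in game_tree:                    # terminal node
        return utility[node]
    if maximizing:
        value = float('-inf')
        for child in game_tree[node]:
            value = max(value, alphabeta(child, False, alpha, beta, game_tree, utility))
            alpha = max(alpha, value)            # only the Max player updates alpha
            if alpha >= beta:
                break                            # prune the remaining children
        return value
    value = float('inf')
    for child in game_tree[node]:
        value = min(value, alphabeta(child, True, alpha, beta, game_tree, utility))
        beta = min(beta, value)                  # only the Min player updates beta
        if alpha >= beta:
            break
    return value

# Leaf values reconstructed from the walkthrough; e2 and G's leaves get pruned.
game_tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
             'D': ['d1', 'd2'], 'E': ['e1', 'e2'], 'F': ['f1', 'f2'], 'G': ['g1', 'g2']}
utility = {'d1': 2, 'd2': 3, 'e1': 5, 'e2': 9, 'f1': 0, 'f2': 1, 'g1': 7, 'g2': 5}
print(alphabeta('A', True, float('-inf'), float('inf'), game_tree, utility))   # 3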
CONSTRAINT SATISFACTION PROBLEMS (CSP)
A problem is solved when each variable has a value that satisfies all the constraints on
the variable. A problem described this way is called a constraint satisfaction
problem, or CSP.
For example, in the map-coloring problem for the states and territories of Australia, the variables are the seven regions: X = {WA, NT, Q, NSW, V, SA, T}.
It can be helpful to visualize a CSP as a constraint graph. The nodes of the graph
correspond to variables of the problem, and an edge connects any two variables that
participate in a constraint.
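A tiny generate-and-test sketch of the Australia map-colouring CSP is shown below. It is intentionally naive (no backtracking or constraint propagation), and the three-colour domain is an assumption.

from itertools import product

# Constraint graph for the Australia map-colouring CSP: neighbouring regions
# must receive different colours.
NEIGHBOURS = {
    'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'], 'SA': ['WA', 'NT', 'Q', 'NSW', 'V'],
    'Q': ['NT', 'SA', 'NSW'], 'NSW': ['Q', 'SA', 'V'], 'V': ['SA', 'NSW'], 'T': [],
}
COLOURS = ['red', 'green', 'blue']

def consistent(assignment):
    """An assignment satisfies the CSP if no two neighbours share a colour."""
    return all(assignment[a] != assignment[b] for a in NEIGHBOURS for b in NEIGHBOURS[a])

def solve():
    """Naively test every complete assignment (fine for only 7 variables)."""
    variables = list(NEIGHBOURS)
    for values in product(COLOURS, repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if consistent(assignment):
            return assignment
    return None

print(solve())   # one consistent colouring of the seven regions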