
ALGORITHMS

1. Hill Climbing Search Algorithm


• Introduction (2 marks):

Hill Climbing is a heuristic search algorithm used in AI for optimization problems. It is an
iterative algorithm that starts with an arbitrary solution and tries to find a better one by
making small, incremental changes. Example: imagine you're playing a game where you need to
help a little robot climb a mountain to find treasure! Hill Climbing in AI works just like
that - it helps computers solve problems by always trying to go "up" towards better answers.

• Working Process (3 marks):

Step 1: Initialize with a random solution (starting point). Imagine you put your robot
anywhere on the mountain - that's your starting point!

Step 2: Evaluate all neighboring states. The robot looks around at all the spots it can step to
(just like you would look around before climbing).

Step 3: If a neighbor has a better value than the current state, move to that state. Taking steps:

• If the robot sees a higher spot, it moves there

• If all spots around are lower, it stays where it is

• Just like you would always try to climb up, never down!

Step 4: If no better neighbor exists, stop (a local or global maximum has been reached)

Step 5: Repeat steps 2-4 until no better solution is found. A minimal code sketch of this loop is given below.
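A minimal Python sketch of this loop, assuming the problem supplies a score function and a neighbors function (both names are hypothetical, chosen only for illustration):

import random

def hill_climb(score, neighbors, start, max_steps=1000):
    """Greedy hill climbing: keep moving to a better neighbor until none exists."""
    current = start
    for _ in range(max_steps):
        better = [n for n in neighbors(current) if score(n) > score(current)]
        if not better:                       # no uphill move left: local/global maximum
            break
        current = max(better, key=score)     # steepest-ascent: take the best neighbor
    return current

# Toy example: maximize f(x) = -(x - 7)^2 over the integers, stepping by +/- 1.
score = lambda x: -(x - 7) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climb(score, neighbors, random.randint(-20, 20)))   # converges to 7

Because the robot only ever moves uphill, the same sketch also shows how the "stuck" problem arises: if the score function had several separate hills, the answer would depend on the starting point.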

• Types of Problems the Robot Faces (3 marks):

1. The Stuck Problem: Sometimes the robot reaches a small hill and thinks it's the highest
point, but there might be a bigger hill nearby! (This is called getting stuck at a local
maximum)

2. The Flat Ground Problem: Sometimes the robot finds flat ground and doesn't know
which way to go (This is called the plateau problem)

3. The Zigzag Problem: Sometimes the robot has to zigzag left and right to go up, which can
be confusing! (This is called the ridge problem).

Real-Life Example (2 marks): Think of playing a video game where you need to get the highest score:

• You try different moves (like the robot taking steps)

• If your score goes up, you keep doing similar moves

• If your score goes down, you try something else

• You stop when you can't get a higher score


2. A* Algorithm
Introduction (2 marks):

Think of A* like a really smart GPS finding the shortest path from your home to a toy store. It's
smarter than other algorithms because it:

• Looks at the distance already traveled

• Guesses the remaining distance

• Adds these together to make smart decisions

How A* Works (3 marks):

1. Starting Point:

• Like putting your finger on "Home" on a map

• Keeps a list of places to check (open list)

• Keeps track of places already checked (closed list)

2. Making Decisions:

• f(n) = g(n) + h(n)

o g(n): Distance traveled so far

o h(n): Estimated distance to goal

o f(n): Total expected distance

3. Moving Forward:

• Always picks the path with the lowest f(n)

• Like choosing the shortest-looking route (a short code sketch is given below)
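A compact Python sketch of this loop, assuming the caller supplies the graph (a neighbors function), an edge cost function, and the heuristic h; all names here are illustrative, not a fixed API:

import heapq

def a_star(start, goal, neighbors, cost, h):
    """A* search: always expand the node with the lowest f(n) = g(n) + h(n)."""
    open_list = [(h(start), start)]          # priority queue ordered by f(n)
    g = {start: 0}                           # g(n): best known cost from the start
    came_from = {}                           # for rebuilding the path at the end
    closed = set()                           # places already checked
    while open_list:
        _, current = heapq.heappop(open_list)
        if current == goal:                  # goal reached: walk back to the start
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return list(reversed(path))
        if current in closed:
            continue
        closed.add(current)
        for nxt in neighbors(current):
            new_g = g[current] + cost(current, nxt)
            if nxt not in g or new_g < g[nxt]:
                g[nxt] = new_g
                came_from[nxt] = current
                heapq.heappush(open_list, (new_g + h(nxt), nxt))
    return None                              # no path exists

# Tiny hand-made graph: the cheapest route from A to D is A -> C -> D.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
dist = {("A", "B"): 1, ("A", "C"): 4, ("B", "D"): 5, ("C", "D"): 1}
h_est = {"A": 3, "B": 4, "C": 1, "D": 0}     # never overestimates, so A* is optimal
print(a_star("A", "D", lambda n: graph[n],
             lambda a, b: dist[(a, b)], lambda n: h_est[n]))   # ['A', 'C', 'D']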

Real Example (2 marks): Finding a path from Delhi to Mumbai:

• g(n): Distance already traveled from Delhi

• h(n): Straight-line distance to Mumbai

• A* keeps choosing cities that minimize the total expected distance f(n)
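For example, with made-up round numbers (not real road distances): if a candidate city is 400 km of driving from Delhi and roughly 900 km in a straight line from Mumbai, then f(n) = 400 + 900 = 1300 km; a rival candidate with f(n) = 1250 km would be explored first.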

Advantages (2 marks):

1. Smart:

• Uses both past and future information

• Better than just looking ahead or behind

2. Complete:

• Always finds a path if one exists

• Finds the shortest path if h(n) never overestimates the true remaining distance

Formula for Exam: f(n) = g(n) + h(n), where:

• n = current node

• g(n) = actual cost from the start to n

• h(n) = estimated cost from n to the goal

• f(n) = total estimated cost of the path through n

3. Best-First Search


1. Introduction (2 marks)

Best-First Search finds paths like a hiker who always walks toward the mountain peak they
can see, choosing the most promising direction each time.

2. Core Concept (2 marks)

• Uses a heuristic h(n) to estimate the distance to the goal

• Always picks the most promising next step

• Only cares about the estimated distance to the goal, not the distance already traveled (unlike A*, which uses both)

3. Working Steps (4 marks)

a) Start

• Put the starting point in the queue

• Mark it as "to explore"

b) At each step

• Pick the node that looks closest to the goal (lowest h)

• Check if it is the goal

• If not, get all possible next moves

• Add the new moves to the queue

c) Continue until

• The goal is found, or

• There are no more places to check

A short code sketch of this loop is given below.
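A short Python sketch of these steps, again with illustrative names (a neighbors function and a heuristic h supplied by the caller):

import heapq

def best_first_search(start, goal, neighbors, h):
    """Greedy best-first search: always expand the node with the smallest h(n)."""
    queue = [(h(start), start)]              # ordered only by estimated distance to goal
    visited = {start}
    while queue:
        _, current = heapq.heappop(queue)    # the most promising node so far
        if current == goal:
            return True                      # found the goal
        for nxt in neighbors(current):
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(queue, (h(nxt), nxt))
    return False                             # ran out of places to check

# Same toy graph as the A* example above; here only h(n) guides the search.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
h_est = {"A": 3, "B": 4, "C": 1, "D": 0}
print(best_first_search("A", "D", lambda n: graph[n], lambda n: h_est[n]))   # True

Note the contrast with A*: the queue here ignores the distance already traveled, which is why Best-First Search is fast but not guaranteed to return the shortest path.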

4. Real Uses (2 marks)

• Video game characters finding targets

• GPS choosing promising routes

• Robot navigation

• Web crawlers following important links first

4. The Min-Max Algorithm


The Min-Max algorithm is a decision-making tool used in artificial intelligence, especially
in games like chess or tic-tac-toe, to choose the best move. Here's a simple breakdown:
Imagine You're Playing a Game:
1. Two Players: You (the maximizing player) want to win, so you pick the move with the
highest score. Your opponent (the minimizing player) wants to beat you, so they
choose the move with the lowest score for you.
2. Game Tree: Think of all possible moves as a tree, where:
o The top of the tree (the root) is the current situation.
o The branches are all possible moves.
o The leaves at the bottom represent the end results of those moves.
3. Min-Maxing:
o Max's Turn: When it's your turn, you look ahead and choose the move that
gives you the maximum possible score, assuming your opponent will play
optimally to minimize your score.
o Min's Turn: When it's your opponent's turn, they choose the move that
minimizes your score, assuming you will play optimally to maximize your
score.
4. Backtracking: You go through this tree from the bottom up, calculating scores for
each possible move:
o If it's your turn, pick the move with the highest score.
o If it's your opponent's turn, assume they'll pick the move that gives you the
lowest score.
Why It's Useful:
• It helps AI make smart decisions by considering what both players will do next.
• It's used in games and situations where you need to plan several moves ahead.
Simple Example:
• In tic-tac-toe, if you can win in one move, the AI will choose that move.
• If your opponent can win on their next move, the AI will block them.
The Min-Max algorithm ensures the AI makes the best possible decision, even if the
opponent tries to make it lose. A minimal code sketch of Min-Max is given below.
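A minimal recursive Python sketch of Min-Max, assuming the game provides three hypothetical helpers: is_terminal (is this an end position?), score (value of an end position for Max), and moves (all positions reachable in one move):

def minimax(state, is_terminal, score, moves, maximizing=True):
    """Return the best score the player to move can force."""
    if is_terminal(state):
        return score(state)                  # leaf: the end result of the game
    values = [minimax(m, is_terminal, score, moves, not maximizing)
              for m in moves(state)]
    return max(values) if maximizing else min(values)

# Toy game: the "state" is a tiny hand-made tree of nested lists; ints are leaf scores.
tree = [[3, 5], [2, 9]]
print(minimax(tree,
              is_terminal=lambda s: isinstance(s, int),
              score=lambda s: s,
              moves=lambda s: s))            # 3: Max picks the branch where Min leaves 3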

5. Alpha-Beta Pruning

Alpha-Beta Pruning is a trick that makes the Min-Max algorithm smarter and faster when
it's deciding the best move in games like chess or tic-tac-toe.
Imagine You’re Playing a Game:
1. You (the player) want to make the best move.
2. The computer (your opponent) also wants to make the best move.
But instead of checking all possible moves (which can take a long time), the computer
uses Alpha-Beta Pruning to skip moves that it knows won't help.
How Does It Work?
1. Alpha: This keeps track of the best score that the maximizing player (you) can
guarantee so far. It starts very low and goes up as you find better options.
2. Beta: This keeps track of the best score that the minimizing player (the computer)
can guarantee so far. It starts very high and goes down as the computer finds options
that are better for it (worse for you).
The Trick:
• While going through the game tree (all possible moves), if the computer finds a move
that is clearly no better than one it already knows about, it stops looking at the rest of
that branch. Those remaining moves cannot change the final decision.
Example:
1. You’re trying to find the best move (maximize score).
2. The computer checks the first few moves and finds a good one for itself.
3. If it finds another move later that’s worse than what it already found, it skips
checking that move further.
Why It's Useful:
• It saves time by not checking every single move.
• It still returns the same best move as plain Min-Max, just found faster.
In simple words: Alpha-Beta Pruning helps the computer skip bad moves and focus only
on the good ones, making game decisions faster! A short code sketch is given below.
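A sketch of the same Min-Max search with Alpha-Beta cut-offs added (same hypothetical helpers as above; the pruning happens whenever beta <= alpha):

def alphabeta(state, is_terminal, score, moves, maximizing=True,
              alpha=float("-inf"), beta=float("inf")):
    """Min-Max with Alpha-Beta pruning: skip branches that cannot change the answer."""
    if is_terminal(state):
        return score(state)
    if maximizing:
        best = float("-inf")
        for m in moves(state):
            best = max(best, alphabeta(m, is_terminal, score, moves, False, alpha, beta))
            alpha = max(alpha, best)         # best score Max can already guarantee
            if beta <= alpha:                # Min has a better option elsewhere: prune
                break
        return best
    best = float("inf")
    for m in moves(state):
        best = min(best, alphabeta(m, is_terminal, score, moves, True, alpha, beta))
        beta = min(beta, best)               # best score Min can already guarantee
        if beta <= alpha:                    # Max has a better option elsewhere: prune
            break
    return best

# Same toy tree as the Min-Max sketch: the answer is identical (3), but the leaf
# with value 9 is never evaluated because that branch is pruned.
tree = [[3, 5], [2, 9]]
print(alphabeta(tree,
                is_terminal=lambda s: isinstance(s, int),
                score=lambda s: s,
                moves=lambda s: s))          # 3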
