FAI Unit2

The document discusses various search algorithms in artificial intelligence, including Random Search, Depth-First Search (DFS), Breadth-First Search (BFS), and Heuristic Search, highlighting their properties, advantages, and limitations. It emphasizes the importance of selecting the appropriate algorithm based on problem constraints and the nature of the solution space. Additionally, it briefly covers Support Vector Machine (SVM) and Principal Component Analysis (PCA) as techniques for classification and dimensionality reduction, respectively.


Search Algorithms in Artificial Intelligence

Introduction

Search algorithms are fundamental to problem-solving in artificial intelligence
(AI). They help in navigating through problem spaces to find solutions. This
presentation covers various search strategies, including Random Search, Search
with Closed and Open Lists, Depth-First Search (DFS), Breadth-First Search (BFS),
and Heuristic Search. Each approach has different applications, advantages, and
limitations. Understanding these methods is crucial for optimizing AI-based
search and problem-solving.

Artificial Intelligence is the study of building agents that act rationally. Most of
the time, these agents perform some kind of search algorithm in the
background in order to achieve their tasks.

A search problem consists of:


o A State Space. The set of all possible states the agent can be in.
o A Start State. The state from which the search begins.
o A Goal Test. A function that examines the current state and returns
whether or not it is the goal state.

Other Terminology:

o Search tree: A tree representation of the search problem. The root of the
search tree is the node corresponding to the initial state.
o Actions: A description of all the actions available to the agent.
o Transition model: A description of what each action does.
o Path Cost: A function that assigns a numeric cost to each path.
o Solution: An action sequence that leads from the start node to the goal
node.
o Optimal Solution: A solution with the lowest path cost among all
solutions.

The solution to a search problem is a sequence of actions, called a plan, that
transforms the start state into the goal state.

This plan is achieved through search algorithms.

Properties of Search Algorithms:

Completeness: A search algorithm is said to be complete if it is guaranteed to
return a solution whenever at least one solution exists for the given input.

Optimality: If the solution found by an algorithm is guaranteed to be the best
(lowest path cost) among all possible solutions, then it is said to be an
optimal solution.

Time Complexity: A measure of the time an algorithm takes to complete its
task.

Space Complexity: The maximum storage space required at any point during
the search, expressed in terms of the complexity of the problem.

Here are some common search algorithms:

1. Random Search

Random search is a simplistic approach where possible solutions are generated
randomly until a goal state is reached. It is inefficient but useful in cases where
no heuristics or structured knowledge about the problem exists.

How It Works:

 Generates random possible solutions without any specific direction.
 Checks each randomly generated solution against the goal condition.
 Repeats until a valid solution is found.

Example with Diagram:

Consider finding a hidden treasure on a grid-based map. In a random search, an
explorer would randomly move in any direction without following a structured
path, checking each cell until the treasure is found.

Grid Representation:
[S]  -   -   -
 -   X   -   -
 -   -   -   -
 -   -   -  [T]

(S = Start, T = Treasure, X = Random visit)

This method lacks efficiency since the explorer does not use prior knowledge of
the map.
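The behaviour above can be sketched in a few lines; this is an illustrative toy, with the grid size, treasure position, and function names made up for the example:

```python
import random

def random_search(grid_size, treasure, max_steps=10_000, seed=0):
    """Probe random cells until the treasure is found (no memory, no strategy)."""
    rng = random.Random(seed)
    for step in range(1, max_steps + 1):
        cell = (rng.randrange(grid_size), rng.randrange(grid_size))
        if cell == treasure:
            return cell, step  # found it; report how many random probes it took
    return None, max_steps     # gave up

cell, steps = random_search(4, treasure=(3, 3))
print(cell, steps)
```

Because cells are drawn independently, the same cell may be probed many times, which is exactly why the method is inefficient.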

Pros:

 Simple to implement.
 Can explore unconventional solutions.
 Useful when no domain knowledge is available.
Cons:

 Inefficient and slow.
 No guarantee of finding an optimal solution.

2. Search with Closed and Open Lists

This approach improves efficiency by maintaining two lists:

 Open List: Stores nodes that need to be explored.
 Closed List: Keeps track of visited nodes to avoid redundant searches.

How It Works:

 The algorithm starts by adding the initial node to the open list.
 A node is picked from the open list, expanded, and moved to the closed
list.
 The process continues until the goal is found or no nodes remain in the
open list.

Example with Diagram:

In a maze-solving problem, the open list contains unexplored junctions, while the
closed list maintains visited paths, preventing backtracking.

Maze Representation:
[S]  -  [O]  -  [T]
 -  [O]  -  [X]
 -  [X]  -  [X]
 -   -   -   -

(S = Start, T = Target, O = Open List, X = Closed List)
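A minimal sketch of this bookkeeping, assuming the graph is given as an adjacency dict (the node names loosely mirror the maze labels above but are otherwise arbitrary):

```python
from collections import deque

def graph_search(graph, start, goal):
    """Generic search using an open list (frontier) and a closed list (visited)."""
    open_list = deque([start])      # nodes waiting to be explored
    closed_list = set()             # nodes already expanded
    parents = {start: None}         # remembered so the path can be reconstructed
    while open_list:
        node = open_list.popleft()  # FIFO order here; other orders yield DFS, A*, etc.
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        closed_list.add(node)
        for neighbour in graph.get(node, []):
            if neighbour not in closed_list and neighbour not in parents:
                parents[neighbour] = node
                open_list.append(neighbour)
    return None  # open list exhausted: no path exists

maze = {"S": ["O1"], "O1": ["O2", "T"], "O2": [], "T": []}
print(graph_search(maze, "S", "T"))
```

The choice of how nodes are picked from the open list is what distinguishes the concrete algorithms that follow.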

Pros:

 Prevents cycles and redundant calculations.
 Optimizes search space traversal.
 Improves efficiency over naive approaches.

Cons:

 Requires additional memory to store lists.
 Can be computationally expensive for large problems.

3. Depth-First Search (DFS)

DFS explores as far as possible along each branch before backtracking.

How It Works:
 The algorithm starts at the root node.
 It explores a path as deep as possible before backtracking.
 If a dead-end is reached, the algorithm backtracks to the previous node.

Example with Diagram:

Consider a tree structure where DFS explores a branch deeply before moving to
another branch.

(A)
/ \
(B) (C)
/ \ \
(D) (E) (F)

DFS traversal: A → B → D → E → C → F
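The traversal above can be reproduced with a short recursive sketch (representing the tree as an adjacency dict is an assumption made for illustration):

```python
def dfs(tree, node, visited=None):
    """Visit a node, then recurse into each child before moving on to siblings."""
    if visited is None:
        visited = []
    visited.append(node)
    for child in tree.get(node, []):
        dfs(tree, child, visited)
    return visited

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"]}
print(dfs(tree, "A"))  # ['A', 'B', 'D', 'E', 'C', 'F']
```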

Pros:

 Uses less memory compared to BFS.
 Efficient for deep solutions.
 Works well when the solution is located deep in the search tree.

Cons:

 Can get stuck in infinite loops if cycles exist.
 Not optimal, as it may not find the shortest path.
 May traverse unnecessary nodes.

Example:

Traversing nodes in a given tree using DFS.


4. Breadth-First Search (BFS)

BFS explores all neighboring nodes before moving to the next level.

How It Works:

 The algorithm starts at the root node.
 It explores all direct child nodes before moving to the next level.
 This process continues until the goal node is found.

Example with Diagram:

For the same tree structure used in DFS:

(A)
/ \
(B) (C)
/ \ \
(D) (E) (F)

BFS traversal: A → B → C → D → E → F
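The level-by-level order can be sketched with a FIFO queue (same assumed adjacency-dict representation as in the DFS example):

```python
from collections import deque

def bfs(tree, root):
    """Visit nodes level by level using a FIFO queue."""
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        order.append(node)
        queue.extend(tree.get(node, []))  # children join the back of the queue
    return order

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"]}
print(bfs(tree, "A"))  # ['A', 'B', 'C', 'D', 'E', 'F']
```

Swapping the queue for a stack turns this into DFS, which is why the two algorithms are so often presented together.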

Pros:

 Guarantees the shortest path in an unweighted graph.
 Systematic search method.
 Useful for exploring all possible solutions at shallow levels.

Cons:

 Requires more memory compared to DFS.
 Inefficient for deep graphs.
 Can be slow if the search space is large.

Example:

Tree traversal using BFS step by step.


5. Heuristic Search

Heuristic search algorithms use problem-specific knowledge to guide the search
towards the goal efficiently.

How It Works:

 A heuristic function estimates the cost or distance to the goal.
 The algorithm prioritizes nodes based on this heuristic.
 It aims to minimize search time by focusing on promising paths.

Example with Diagram:

A* Search applied to a grid where the numbers in parentheses are heuristic
values estimating the distance to the goal:

S(5) - A(4) - B(2) - T(0)
  |      |
C(6) - D(3) - E(1)

Here, A* chooses paths based on f(n) = g(n) + h(n), where g(n) is the path cost
and h(n) is the heuristic estimate.
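A sketch of the f(n) = g(n) + h(n) ordering using a priority queue; the edge costs below are illustrative assumptions chosen to be consistent with the heuristic values in the diagram:

```python
import heapq

def a_star(graph, h, start, goal):
    """graph: {node: [(neighbour, step_cost), ...]}; h: heuristic estimates."""
    open_heap = [(h[start], 0, start, [start])]  # entries are (f, g, node, path)
    closed = set()
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)  # expand lowest f(n) first
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)
        for neighbour, cost in graph.get(node, []):
            if neighbour not in closed:
                new_g = g + cost
                heapq.heappush(open_heap, (new_g + h[neighbour], new_g,
                                           neighbour, path + [neighbour]))
    return None, float("inf")

graph = {"S": [("A", 1), ("C", 2)], "A": [("B", 2)], "B": [("T", 2)],
         "C": [("D", 3)], "D": [("E", 2)], "E": [("T", 1)]}
h = {"S": 5, "A": 4, "B": 2, "C": 6, "D": 3, "E": 1, "T": 0}
path, cost = a_star(graph, h, "S", "T")
print(path, cost)  # ['S', 'A', 'B', 'T'] 5
```

Because the heuristic here never overestimates the true remaining cost (it is admissible), A* returns the cheapest path.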

Examples:

 Greedy Best-First Search: Expands the most promising node based on a
heuristic function, such as choosing the city closest to the destination in a
route-planning problem.
 A* Search: Combines path cost and heuristic estimation to find an optimal
path.

For example, in Google Maps, A* Search calculates the shortest route
considering both distance and traffic conditions.

 Hill Climbing: Used in optimization problems where the algorithm
continuously moves toward higher-valued solutions, such as in AI-based
chess engines.

Pros:

 Reduces the number of explored nodes.
 More efficient than uninformed searches.
 Can be tailored to specific problem domains using heuristics.

Cons:

 Requires a good heuristic function for efficiency.
 May not always be optimal if the heuristic is inaccurate.
 Susceptible to getting stuck in local optima (e.g., in Hill Climbing).

Conclusion

Different search strategies have their advantages and drawbacks. The choice of
algorithm depends on the problem constraints such as memory, execution time,
and the nature of the solution space.

 Random Search is useful when no structured approach is available.
 Search with Open/Closed Lists optimizes search efficiency.
 DFS is memory-efficient but may not find the shortest path.
 BFS guarantees the shortest path but consumes more memory.
 Heuristic Search improves efficiency but depends on a well-defined
heuristic.

Each approach is suitable for specific applications, such as path finding, game AI,
and decision-making systems. Selecting the right algorithm is crucial for solving
problems efficiently.

Support Vector Machine (SVM)

Overview

Support Vector Machine (SVM) is a supervised learning algorithm primarily used for
classification and regression tasks. It finds an optimal hyperplane that best separates
different classes in a dataset.

Key Concepts

Hyperplane: A decision boundary that separates different classes.

Support Vectors: Data points closest to the hyperplane, which influence its
position and orientation.

Margin: The distance between the hyperplane and the nearest data points
from either class. SVM aims to maximize this margin.

Kernel Trick: Enables SVM to handle non-linear data by transforming it into a
higher-dimensional space.

Types of SVM

Linear SVM: Used when data is linearly separable.

Non-Linear SVM: Uses kernel functions like polynomial, radial basis
function (RBF), and sigmoid to classify complex data.

Example

Consider a dataset with two categories: spam and non-spam emails.

SVM analyzes features like word frequency and classifies emails into these two
categories by finding the optimal hyperplane that best separates them.
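As an illustrative sketch (not the exact solver used in practice), a linear SVM can be trained with subgradient descent on the hinge loss; the two-feature toy dataset below is made up, and real systems would use a library such as scikit-learn:

```python
def train_linear_svm(points, labels, lr=0.01, lam=0.01, epochs=500):
    """points: list of (x1, x2); labels: +1 or -1. Minimizes hinge loss + L2 term."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(points, labels):
            if y * (w[0] * x1 + w[1] * x2 + b) < 1:  # inside margin or wrong side
                w[0] += lr * (y * x1 - lam * w[0])
                w[1] += lr * (y * x2 - lam * w[1])
                b += lr * y
            else:                                    # safely classified: shrink w only
                w[0] -= lr * lam * w[0]
                w[1] -= lr * lam * w[1]
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

pts = [(1, 1), (2, 1), (1, 2), (6, 6), (7, 6), (6, 7)]  # two separable clusters
ys = [-1, -1, -1, 1, 1, 1]
w, b = train_linear_svm(pts, ys)
print(all(predict(w, b, p) == y for p, y in zip(pts, ys)))
```

The regularization term lam plays the margin-maximizing role: it keeps w small, and a small w relative to the fixed margin of 1 corresponds to a wide geometric margin.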

Principal Component Analysis (PCA)

Overview

Principal Component Analysis (PCA) is an unsupervised dimensionality reduction
technique that transforms a dataset into a lower-dimensional space while preserving
as much variance as possible.

Key Concepts

Variance: A measure of data spread. PCA seeks to retain the most significant
variations.

Eigenvalues and Eigenvectors: Eigenvalues determine the importance of
principal components, while eigenvectors define their direction.

Principal Components: New orthogonal axes that maximize variance in the
data.

Dimensionality Reduction: Reducing the number of features while retaining
important information.

Steps in PCA

1. Standardize the dataset.
2. Compute the covariance matrix.
3. Calculate eigenvalues and eigenvectors.
4. Select the top principal components.
5. Transform the original data to the new feature space.
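The steps above can be sketched with NumPy (the synthetic dataset is an assumption made for illustration):

```python
import numpy as np

def pca(X, k):
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)  # 1. standardize
    C = np.cov(Xs, rowvar=False)               # 2. covariance matrix
    vals, vecs = np.linalg.eigh(C)             # 3. eigenvalues / eigenvectors
    order = np.argsort(vals)[::-1][:k]         # 4. top-k components by eigenvalue
    return Xs @ vecs[:, order], vals[order]    # 5. project onto the new axes

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X[:, 2] = X[:, 0] + 0.01 * rng.normal(size=100)  # third feature nearly redundant
Z, top_vals = pca(X, 2)
print(Z.shape)  # (100, 2)
```

Because the third feature almost duplicates the first, two principal components capture nearly all of the variance in this dataset.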

Example

Consider a dataset with multiple features (e.g., customer purchasing behavior across
multiple products). PCA reduces the dimensions to a few principal components,
allowing for easier visualization and efficient model training.

Comparison: SVM vs. PCA

Feature         SVM                             PCA
Type            Supervised Learning             Unsupervised Learning
Purpose         Classification & Regression     Dimensionality Reduction
Key Component   Hyperplane & Support Vectors    Principal Components
Handling Data   Works well with both linear     Reduces data complexity for
                and non-linear data             better efficiency

Conclusion

SVM is a powerful classification tool that finds an optimal hyperplane for separating
data, while PCA is useful for reducing data complexity while retaining essential
features. Combining these techniques can lead to more efficient and effective AI
models.
