
ALGORITHMS AND COMPLEXITY
DUNGOG, ERLAN JUSTIN C.
CALUCOP, JAMIL
LERIO, VINCE JIMWEL

2023

GRAPHS
DEFINITION

A GRAPH PROVIDES A RICH FRAMEWORK FOR MODELING AND SOLVING A WIDE RANGE OF REAL-WORLD PROBLEMS.

KEY COMPONENTS OF GRAPH:

Nodes
These are the fundamental elements of a
graph, representing individual entities or
points.

Edges
are the connections between nodes and
represent relationships or interactions
between the corresponding entities.

Weight
A numerical value assigned to an edge that represents some measure of the strength or cost of the connection.



GRAPHS

IMPLEMENTATION
Implementing a graph involves choosing appropriate data structures and designing algorithms, which determine the computational cost and complexity of graph operations.



ADJACENCY MATRIX



An adjacency matrix is a 2D array where each element matrix[i][j] represents the presence or absence of an edge between nodes i and j.

Pros: Easy to understand and implement.
Cons: Consumes more memory for large graphs.
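
As an illustration, a minimal Python sketch of an adjacency matrix for a small undirected graph (the node count, edges, and function name below are assumed for the example):

# Adjacency matrix for an undirected graph with 4 nodes;
# matrix[i][j] = 1 marks an edge between nodes i and j.
n = 4
matrix = [[0] * n for _ in range(n)]

def add_edge(i, j):
    matrix[i][j] = 1
    matrix[j][i] = 1  # undirected graph: mark both directions

add_edge(0, 1)
add_edge(1, 2)
print(matrix[0][1])  # 1 -> edge present
print(matrix[0][3])  # 0 -> edge absent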
ADJACENCY LIST



An adjacency list represents the graph as an array
of lists or linked lists.

Pros: Consumes less memory for large graphs.
Cons: Less efficient for checking the presence of an edge between two nodes in dense graphs.
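
A minimal Python sketch of an adjacency list, assuming the graph is stored as a dictionary mapping each node to a list of its neighbors (the nodes and edges are illustrative):

# Adjacency list: each node maps to the list of its neighbors.
adj = {0: [], 1: [], 2: [], 3: []}

def add_edge(i, j):
    adj[i].append(j)
    adj[j].append(i)  # undirected graph: store the edge on both endpoints

add_edge(0, 1)
add_edge(1, 2)
print(adj[1])  # [0, 2]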
EDGE LIST



An edge list is an array of tuples, where each tuple represents an edge in the graph. Each tuple typically contains information about the two nodes and, for weighted graphs, the weight of the edge.

Pros: Simple to implement.
Cons: Less efficient for many graph operations compared to adjacency lists or matrices.
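
A minimal Python sketch of an edge list for a small weighted graph (the specific edges and weights are assumed for the example):

# Edge list: each tuple stores (node u, node v, weight).
edges = [
    (0, 1, 4),
    (1, 2, 7),
    (0, 3, 2),
]
for u, v, w in edges:
    print(f"{u} -- {v} (weight {w})")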
GRAPH ALGORITHMS



Various graph algorithms can be implemented to
solve specific problems.
GRAPHS

SEARCHING
refers to the process of finding
specific elements or patterns within
a graph data structure.


DEPTH-FIRST SEARCH (DFS)


DFS is a graph traversal algorithm
that explores as far as possible
along each branch before
backtracking.


BREADTH-FIRST SEARCH (BFS)


BFS is a graph traversal algorithm
that explores all the neighbors of a
node before moving on to their
neighbors.



CONNECTED COMPONENTS
IN ALGORITHM
Lerio, Vince Jimwel M.
CONNECTED COMPONENTS
A connected component in an undirected graph is a maximal subgraph in
which any two vertices are connected by a path. "Maximal" means that you
cannot add any more vertices from the graph to the component without
breaking the property of connectivity.

There are several algorithms to find connected components in a graph. Two of the most common approaches are Depth-First Search (DFS) and Breadth-First Search (BFS). These algorithms start at a vertex and explore all connected vertices, marking them as part of the same component. The process is repeated for any unvisited vertices until all components are found, as in the sketch below.
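
A minimal Python sketch of this approach, assuming the graph is given as a dictionary of adjacency lists (the function name and test graph are illustrative):

# Label connected components of an undirected graph by running an
# iterative DFS from every vertex that has not been visited yet.
def connected_components(graph):
    visited = set()
    components = []
    for start in graph:
        if start in visited:
            continue
        visited.add(start)
        stack = [start]
        component = []
        while stack:
            node = stack.pop()
            component.append(node)
            for neighbor in graph[node]:
                if neighbor not in visited:
                    visited.add(neighbor)
                    stack.append(neighbor)
        components.append(component)
    return components

graph = {"A": ["B"], "B": ["A"], "C": ["D"], "D": ["C"], "E": []}
print(connected_components(graph))  # [['A', 'B'], ['C', 'D'], ['E']]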
PROPERTIES OF CONNECTED COMPONENTS:
1. Connectivity: All vertices within a connected component are connected to each other, meaning there is a path
between any pair of vertices within the component. There is no path connecting vertices from one component
to vertices in another component.
2. Uniqueness: Each connected component in a graph or image is unique, meaning it is a maximal subset of
vertices with the property of connectivity. In other words, you cannot combine two different connected
components into a larger one by adding more vertices while maintaining connectivity.
3. Size: Connected components can vary in size. Some may contain only a single vertex, while others may contain
a large number of vertices. The size of a connected component is determined by the number of elements it
contains.
4. Vertex or Pixel Labels: In the context of graphs, you can assign labels to connected components. For example,
you can number them or give them arbitrary names. In image processing, connected components are often
labeled with unique identifiers, and these labels are used for various purposes, such as segmentation.
5. Graph Representation: In a graph, connected components can be represented as subgraphs, where each
subgraph is a connected component.
6. Applications: Connected components are used in various applications, such as image segmentation, network
analysis, and component labeling in data structures. They help in identifying and isolating distinct regions or
groups within a larger system.
7. Connectivity Analysis: The study of connected components can provide insights into the overall structure and
organization of a graph or an image. For example, it can help identify isolated clusters, bridge nodes, or
regions of interest.
8. Algorithms: There are efficient algorithms for finding connected components in both graphs and images. In
graphs, depth-first search (DFS) and breadth-first search (BFS) are commonly used algorithms for this
purpose.
Connected components are used in various applications, such as:

Network analysis: Identifying clusters or communities in social networks, communication networks, or transportation networks.

Image processing: In image segmentation, connected components can help identify regions with similar characteristics.

Component-based software engineering: Identifying modules or components in software systems.

Database systems: Detecting connected components can be useful for analyzing relationships in data.
DEPTH-FIRST SEARCH AND BREADTH-FIRST SEARCH
Depth-First Search (DFS) and Breadth-First Search (BFS) are two fundamental
graph traversal algorithms used in computer science to explore and search
through graph data structures.

DEPTH-FIRST SEARCH
DFS is a graph traversal algorithm that explores as far as possible along a branch before backtracking.
It starts at an initial node and explores as far as possible along each branch before backtracking.
DFS uses a stack (either explicitly or through recursive calls) to keep track of the nodes to visit.

BREADTH-FIRST SEARCH
BFS is a graph traversal algorithm that explores all the neighbor nodes at the current depth before moving on to the nodes at the next depth level.
It starts at an initial node and explores all its neighbors before proceeding to their neighbors.
BFS uses a queue data structure to keep track of nodes to visit.
BFS is often used to find the shortest path between two nodes in an unweighted graph or to explore a graph level by level.
DEPTH-FIRST SEARCH
Depth-First Search (DFS) is a graph traversal algorithm used in computer
science to explore or search through the nodes of a graph. It is a fundamental
algorithm that helps solve various graph-related problems. DFS is called "depth-
first" because it explores as deeply as possible along each branch of the graph
before backtracking.
Depth-First Search (DFS) is commonly used for various tasks in
computer science, including:

Finding connected components: You can use DFS to find all the nodes reachable
from a given node in a connected component of a graph.

Detecting cycles: DFS can be used to detect cycles in a graph, which is useful in
tasks like topological sorting or checking for the presence of loops.

Pathfinding: You can use DFS to find paths between nodes in a graph.

Maze solving: DFS can be applied to solve mazes or puzzles by exploring different
paths until a solution is found.
STEPS IN DEPTH-FIRST SEARCH
1. Start at a source node: You begin the traversal at a starting node (or vertex) in the
graph.
2. Visit and mark the node: You visit and mark the current node as visited to keep track
of the nodes you've explored.
3. Explore unvisited neighbors: From the current node, you choose an unvisited neighbor
(adjacent node) and move to that neighbor. This neighbor becomes the new current
node.
4. Recursively repeat: You repeat steps 2 and 3 for the new current node, exploring as
deeply as possible along each branch before backtracking to explore other branches.
5. Backtrack: When there are no unvisited neighbors from the current node, you
backtrack to the previous node and continue exploring other unvisited neighbors.
6. Repeat until all nodes are visited: You continue this process until you've visited all
nodes in the graph or until you've achieved your specific goal (e.g., finding a specific
node, detecting cycles, or traversing a path).
Sample code for DFS:
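
A minimal recursive DFS sketch in Python (an assumed example; the graph, node names, and output are illustrative):

# Recursive depth-first search: visit a node, then recurse into each
# unvisited neighbor before backtracking.
def dfs(graph, node, visited=None):
    if visited is None:
        visited = set()
    visited.add(node)
    print(node, end=" ")  # process the node (here: print the visit order)
    for neighbor in graph[node]:
        if neighbor not in visited:
            dfs(graph, neighbor, visited)
    return visited

graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["E"],
    "D": [],
    "E": [],
}
dfs(graph, "A")

Output (for the sketch above): A B D C E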
Advantage and Disadvantages of using Depth-First Search
Advantages:
Simplicity: DFS is relatively easy to implement, both using recursive and iterative approaches. It doesn't require any
additional data structures other than a stack (for the iterative version) or the call stack (for the recursive version).

Memory Efficiency: The iterative version of DFS is memory-efficient because it uses a small amount of extra memory to
maintain the stack, which typically has a smaller memory footprint compared to the queue used in Breadth-First Search
(BFS).

Space Complexity: The space complexity of DFS is often better than BFS in terms of average case performance. DFS
tends to use less memory in graphs with a large branching factor.

Path Finding: DFS is useful for finding paths in a graph, such as finding a path between two nodes or finding all paths
from a source to a destination.

Easily Modified: DFS can be easily modified or extended to solve various graph-related problems, including cycle
detection, connectivity analysis, and more.

Disadvantages:
Completeness: DFS may not find all possible paths or solutions in certain cases. It depends on the order in which nodes
are visited, and it may not explore all possible branches of the graph.

Non-Optimal Path: When used for finding the shortest path in weighted graphs, DFS is not guaranteed to find the optimal
path. It can find a path that is longer than the shortest one.

Stack Overflow (Recursion): In the recursive implementation of DFS, deep or infinite recursion can lead to a stack
overflow error, especially in graphs with deep branches.

Time Complexity: In worst-case scenarios, DFS can take exponential time, especially in graphs with many branching
possibilities, which can lead to inefficient performance.

Not Suitable for All Problems: DFS may not be the best choice for some graph problems, such as finding the shortest path
in weighted graphs, where algorithms like Dijkstra's or A* may be more appropriate.
BREADTH-FIRST SEARCH
Breadth-First Search (BFS) is a fundamental graph traversal algorithm used in
computer science and algorithms. It is primarily used for exploring and traversing
graphs or trees in a systematic manner.
STEPS IN BREADTH-FIRST SEARCH
Start at the Initial Node: Begin the traversal by selecting a starting node in the
graph or tree. This is where the exploration process will commence.

Explore Neighbors: Visit and process the current node. Explore all unvisited
neighbors of the current node before moving on to their neighbors. This ensures
that you traverse nodes at the same depth level before going deeper.

Queue Unvisited Neighbors: Add the unvisited neighbors of the current node to a
queue data structure. This step maintains a FIFO (First-In-First-Out) order for
node exploration, ensuring that nodes at the same level are processed before
moving deeper into the graph.

Dequeue and Repeat: Dequeue the front node from the queue, making it the new
current node for exploration. Repeat steps 2 and 3 for the newly selected current
node until the queue is empty.

Terminate: Once the queue is empty, the BFS traversal is complete. You have visited all nodes reachable from the initial node.
SAMPLE CODE:
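
A minimal BFS sketch in Python (an assumed example; the graph and output are illustrative):

# Breadth-first search: explore the graph level by level using a queue.
from collections import deque

def bfs(graph, start):
    visited = {start}
    queue = deque([start])
    order = []
    while queue:
        node = queue.popleft()        # dequeue the front node
        order.append(node)
        for neighbor in graph[node]:  # queue unvisited neighbors
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["E"],
    "D": [],
    "E": [],
}
print(" ".join(bfs(graph, "A")))

OUTPUT (for the sketch above): A B C D E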
Advantage and Disadvantages of using Breadth-First Search
Advantages:
Guaranteed Shortest Path: BFS guarantees that it finds the shortest path between two nodes in an unweighted graph. It explores nodes level
by level, ensuring that shorter paths are found before longer ones.

Completeness: BFS is a complete algorithm, meaning it will always find a solution if one exists. If there is a path from the source to the target,
BFS will find it.

Exploration of Neighbor Nodes: BFS explores all neighbor nodes of a given node before moving on to their neighbors. This can be useful for
certain tasks like finding connected components or detecting cycles.

Memory Efficiency: BFS can be implemented using a queue data structure, which typically uses less memory compared to some other graph
algorithms like Depth-First Search (DFS), which can use a recursive stack.

Identifying Connected Components: BFS can be used to identify and enumerate connected components in an undirected graph efficiently.

Disadvantages:
Memory Usage: While BFS is memory-efficient compared to some other algorithms, it still requires more memory than DFS in most cases
because it needs to store all nodes at the current level in a queue. This can be a drawback in graphs with a large number of nodes.

Not Suitable for Weighted Graphs: BFS is designed for unweighted graphs. It does not take edge weights into account when finding paths,
which can lead to suboptimal results in weighted graphs.

Longer Paths in Practice: In graphs with many branches and deep levels, BFS may explore a large portion of the graph before finding a path,
leading to longer execution times in practice.

Queue Management Overhead: The queue data structure used in BFS can have overhead in terms of memory and processing, especially in
highly dynamic graphs where nodes are frequently added or removed.

Doesn't Handle Cycles Well: BFS may not handle cycles efficiently in directed graphs. Without additional checks or algorithms, it can get stuck
in an infinite loop when encountering cycles.

No Solution vs. Longer Paths: BFS does not provide information about the existence of shorter paths when it finds a path. In some cases, you
may want to know if there are shorter paths in addition to the one found.
RECURSIVE PROCEDURES
IN ALGORITHM
RECURSIVE PROCEDURE

A function or subroutine that calls itself in order to solve a problem. Recursive procedures are commonly used in algorithm design when a problem can be broken down into smaller, similar subproblems. Recursive procedures are defined using a base case and a recursive case.
General outline for defining a recursive procedure:

Base Case: Define a condition that specifies when the recursion should stop. This
condition is crucial to prevent infinite recursion and ensure that the algorithm
terminates. The base case typically provides a direct answer or solution to the
smallest possible instance of the problem.

Recursive Case: Define the procedure in terms of smaller or simpler instances of the same problem. In the recursive case, the procedure calls itself with modified input parameters to solve a smaller subproblem. This step should eventually lead to the base case.

Combine Results: In many recursive algorithms, you'll need to combine the results
from the recursive calls to solve the original problem. This step may involve
mathematical operations, merging data structures, or other relevant processing.
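
Following this outline, a minimal Python sketch (an assumed example) that sums a list recursively:

# Recursive sum of a list, following the base case / recursive case /
# combine results outline above.
def total(values):
    if not values:                        # base case: an empty list sums to 0
        return 0
    return values[0] + total(values[1:])  # recursive case, combined with the head

print(total([1, 2, 3, 4]))  # 10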
RECURSION VS ITERATION
Recursion: Recursive solutions are well-suited for problems with inherent
recursive structures, such as tree traversal or problems involving divide
and conquer strategies.

Iteration: Iterative solutions are suitable for problems that can be solved
with repetitive, sequential steps, such as searching through an array or
performing mathematical calculations.
Recursive Procedure for Connected Component
Often used to identify and label connected components within a graph. Connected components
are sets of vertices within a graph where each vertex is connected to every other vertex in the
same component, and there are no connections between vertices in different components.

Fibonacci Sequence
The Fibonacci sequence and recursive procedures are closely related because the Fibonacci
sequence is often used as a classic example to illustrate the concept of recursion in computer
science and mathematics. Recursive procedures are a natural way to generate the Fibonacci
sequence.

The Fibonacci sequence is defined recursively, where each number in the sequence is the sum of
the two preceding numbers. This recursive definition can be translated directly into a recursive
algorithm or procedure for generating Fibonacci numbers.
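
A minimal recursive Fibonacci sketch in Python (an assumed example):

# Recursive Fibonacci: each number is the sum of the two preceding numbers.
def fib(n):
    if n <= 1:                      # base cases: fib(0) = 0, fib(1) = 1
        return n
    return fib(n - 1) + fib(n - 2)  # recursive case

print([fib(i) for i in range(8)])   # [0, 1, 1, 2, 3, 5, 8, 13]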
Recursive Data Structure
Contains instances of the same data structure within itself. This concept is
often used in computer science and programming to solve problems where
the data can be naturally broken down into smaller, similar subproblems.

Used to implement recursive algorithms.
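
For example, a linked list is a recursive data structure: each node contains a reference to another node of the same kind. A minimal Python sketch (an assumed example):

# A linked-list node refers to another node, so the structure contains
# a smaller instance of itself; recursive algorithms follow that shape.
class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

def length(node):
    if node is None:               # base case: the empty list has length 0
        return 0
    return 1 + length(node.next)   # recursive case: one node plus the rest

chain = Node(1, Node(2, Node(3)))
print(length(chain))  # 3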

Recursive Backtracking
A popular technique used in computer science and algorithm design to solve problems
that involve finding solutions through a sequence of choices or decisions.
How Recursive Backtracking works in a recursive
procedure or algorithm:
Choice and Exploration: At each step of the algorithm, you have a set of choices or options
to consider. You make one choice and proceed with the exploration of that choice.

Recursion: You use a recursive procedure to explore the consequences of the chosen option.
This means that you call the same algorithm/function recursively with a modified state or
context to explore the next step.

Backtracking: If, during the exploration, you find that the current choice leads to a dead
end or doesn't satisfy the problem's constraints or conditions, you backtrack. Backtracking
means that you return from the current recursive call to the previous one and explore other
options from there.

Termination Condition: The recursion continues until you find a solution (or solutions) that
satisfy the problem's conditions, or you have explored all possible choices.
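
A minimal recursive-backtracking sketch in Python that lists all permutations of a few items (an assumed example; the problem and names are illustrative):

# Recursive backtracking: make a choice, explore it recursively,
# then undo the choice (backtrack) and try the next option.
def permutations(items, current=None, results=None):
    if current is None:
        current, results = [], []
    if len(current) == len(items):             # termination condition
        results.append(current[:])
        return results
    for item in items:                         # choice
        if item in current:                    # constraint: use each item once
            continue
        current.append(item)
        permutations(items, current, results)  # recursion / exploration
        current.pop()                          # backtracking
    return results

print(permutations(["A", "B", "C"]))  # the 6 permutations of A, B, C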
LIMITATIONS OF RECURSIVE PROCEDURE:
Stack Overflow: Recursive algorithms often rely on the call stack to manage function calls. If
the recursion depth is too deep, it can lead to a stack overflow error, causing the program to
crash. This limitation restricts the size of the input that can be processed using recursion.

Performance Overhead: Recursive function calls can introduce a performance overhead due to
the repeated creation of stack frames. This overhead can be significant for deeply nested or
large-scale recursive algorithms and may result in slower execution compared to iterative
alternatives.

Memory Usage: Each recursive call consumes memory by creating a new stack frame that stores
local variables and function call information. For problems with a high degree of recursion or
large input data, memory consumption can become a concern.

Limited Applicability: Not all problems have a natural recursive structure, and attempting to
solve them recursively may lead to convoluted and inefficient solutions. Recursive algorithms
are best suited for problems that can be naturally divided into smaller, similar subproblems.

Debugging Complexity: Recursive algorithms can be challenging to debug and understand, especially when the recursion involves complex data structures or multiple recursive calls. Tracking the flow of execution and identifying errors can be more difficult compared to iterative algorithms.
PROOF BY INDUCTION
Calucop, Jamil T.
PROOF BY INDUCTION
Mathematical induction or proof by induction is a method of
mathematical proof typically used to establish statements for natural
numbers.
Its principle is to prove that P(n) is true for all positive integers n. The process involves two steps:
Base/Basis Case: Show that P(1) is true.
Inductive Case: Show that P(k) → P(k+1) is true for all positive integers k.
The induction hypothesis is a critical part of a mathematical
induction proof. It's a supposition or assumption made during the
inductive step of the proof.
Proof by induction is a proof technique based on the following principle:
(P(1) ∧ ∀k (P(k) → P(k + 1))) → ∀n P(n)

PROOF BY INDUCTION
EXAMPLE:
We have an infinite number of dominoes labeled 1, 2, 3, ...

Let P(n) be the proposition that the nth domino is knocked over.

DOMINOES
Once we knock down the first domino, P(1) is true.

We know that if the kth domino is knocked down, it will knock over the (k+1)st domino, so P(k) → P(k+1) holds for all positive integers k.

Hence all the dominoes are knocked over, and P(n) is true for all positive integers n.

PROOF BY INDUCTION
Proof by Mathematical Induction:
Statement: For all positive integers n, the sum of the first n positive integers is given by the formula
1 + 2 + 3 + ... + n = n(n + 1)/2.

Base Case (n = 1): For n = 1, we have S(1) = 1(1 + 1)/2 = 2/2 = 1, which is true. So the base case holds.

Inductive Step: Assume P(k) holds, that is, 1 + 2 + ... + k = k(k + 1)/2. Then
1 + 2 + ... + k + (k + 1) = k(k + 1)/2 + (k + 1) = (k + 1)(k + 2)/2,
which is exactly P(k + 1). Hence P(k) → P(k + 1).

Check (n = 3): S(3) = 3(3 + 1)/2 = 12/2 = 6, and 1 + 2 + 3 = 6, so the formula agrees.

PROOF BY INDUCTION
In programming, proof by induction is not used directly as a proof technique the way it is in mathematics. However, the principles of mathematical induction can inspire programming techniques and strategies for solving problems. Here's how proof-by-induction concepts can be applied in programming:

1. Recursion: In recursion, a problem is divided into smaller, similar subproblems. To solve the original problem, you solve the base case and then use the solution to the smaller subproblem to solve the larger problem.
2. Looping/Iteration: You start with an initial state (analogous to the base
case), perform a series of steps, and update the state until you reach the
desired result or condition.
3. Data Structures: In programming, data structures like linked lists, trees,
and graphs often rely on recursive or iterative techniques that resemble
mathematical induction. You establish a base case, and then you build up
the structure step by step by adding elements or nodes.

4. Algorithm Design: When designing algorithms, you often break
down complex problems into simpler subproblems and solve them
recursively or iteratively. Dynamic programming, for example, is a
technique that uses recursion and memoization to optimize solutions
to problems by solving each subproblem once and storing the
results.
5. Inductive Thinking: The ability to think inductively, breaking down
problems into smaller, manageable parts, is a valuable skill in
programming. It helps you devise efficient and scalable solutions by
recognizing patterns and applying them to larger instances of the
problem.
6. Error Handling: In programming, you often consider edge cases
(base cases) when handling errors or exceptional situations. You
design code to handle these cases explicitly to ensure robustness.

PROOF BY INDUCTION
CHECKING A RECURSION USING INDUCTION
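
The argument below checks a recursive factorial function along the lines of this minimal Python sketch (an assumed example):

# Recursive factorial: factorial(n) = n * factorial(n - 1), factorial(1) = 1.
def factorial(n):
    if n == 1:                   # base case
        return 1
    return n * factorial(n - 1)  # recursive case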
Base Case:
· When n == 1, factorial(1) returns 1 = 1!.
· The function terminates.

Inductive Step:
Induction hypothesis: assume that the algorithm is correct for some value n = k ≥ 1, that is, factorial(k) does return k!.
Does the function work for n = k + 1?

For n = k + 1, factorial(k + 1) returns (k + 1) * factorial(k). By the induction hypothesis P(k), factorial(k) returns k!, so factorial(k + 1) returns (k + 1) * k! = (k + 1)!, which is P(k + 1).

Therefore, by induction, factorial(n) returns n! for all n ≥ 1.

RECURSION TREES
Calucop, Jamil T.
RECURSION TREES
Used for visualizing what happens when a recurrence is
iterated. It diagrams the tree of recursive calls and the
amount of work done at each call.
Models the cost(time) of a recursive execution of an
algorithm.
Mainly used to generate a close guess of the actual
complexity.
Total Cost(time) = Lc (Cost of Leaf Nodes) + Ic (Cost of
Internal nodes)
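
For example, consider the recurrence T(n) = 2T(n/2) + cn (an assumed illustration, in the style of merge sort), drawn as a recursion tree:

Level 0: 1 node, cost cn
Level 1: 2 nodes, cost 2 · c(n/2) = cn
Level 2: 4 nodes, cost 4 · c(n/4) = cn
...
Depth: about log2 n levels, with O(n) leaf nodes of constant cost each.

Total cost ≈ cn · log2 n + O(n), so the recursion tree suggests the guess T(n) = O(n log n).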

REASONS WHY RECURSION TREES ARE ONLY FOR GUESSES:
1. Complexity of Recursive Calls: In some recursive algorithms, the
number of recursive calls and the work done at each level of the
recursion can vary, making it challenging to create an accurate tree.
For example, in divide-and-conquer algorithms like merge sort or
quicksort, the work at each level depends on the size of the input
and the specific partitioning strategy.
2. Non-Uniform Work: In recursive algorithms, not all recursive calls
necessarily perform the same amount of work. Some calls might
involve more complex calculations or processing than others, making
it difficult to accurately model this variation in a simple tree.
3. Recursive Overhead: The recursion tree method often focuses on the
recursive calls themselves but may not account for the overhead
introduced by function calls and parameter passing. In some cases,
this overhead can be significant and impact the overall time
complexity.


4. Tail Recursion: In some recursive algorithms, especially those that are tail-recursive, the compiler or interpreter can optimize the recursion so that it doesn't create additional function calls on the call stack. This optimization can make the actual time complexity different from what the recursion tree suggests.
5. Approximation: Sometimes, the goal of using a recursion tree is not
to find the exact time complexity but to obtain a rough estimate or
understanding of the growth rate. This can be sufficient for high-level
analysis or making informed decisions about algorithm selection
without getting into precise mathematical calculations.

THANK YOU