Algorithms and Complexity
DUNGOG, ERLAN JUSTIN C.
CALUCOP, JAMIL
LERIO, VINCE JIMWEL
2023
GRAPHS
DEFINITION
A graph is a data structure made up of a set of nodes (vertices) connected by edges, used to model relationships between entities.
KEY COMPONENTS OF A GRAPH:
Nodes
These are the fundamental elements of a
graph, representing individual entities or
points.
Edges
are the connections between nodes and
represent relationships or interactions
between the corresponding entities.
Weight
is a numerical value assigned to an edge that
represents some measure of the strength
or cost of the connection.
IMPLEMENTATION
Implementing a graph involves choosing an appropriate data structure
and designing algorithms around it; the choice affects both memory use
and computational complexity. Common representations are the adjacency
matrix, the adjacency list, and the edge list.
ADJACENCY MATRIX
Pros: Easy to understand and implement.
Cons: Consumes more memory for large graphs.
ADJACENCY LIST
Pros: Consumes less memory for large graphs.
Cons: Less efficient for checking the presence of an edge between two nodes in dense graphs.
EDGE LIST
Pros: Simple and compact; stores only the list of edges.
Cons: Slow for finding the neighbors of a node or for checking whether a specific edge exists.
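To make the three representations concrete, here is a minimal sketch in Python (the small three-node graph below is a made-up example, not from the slides):

edges = [(0, 1), (1, 2)]                        # edge list: just the pairs of connected nodes
n = 3                                           # number of nodes: 0, 1, 2

adjacency_matrix = [[0] * n for _ in range(n)]  # n x n grid; 1 marks an edge
for u, v in edges:
    adjacency_matrix[u][v] = 1
    adjacency_matrix[v][u] = 1                  # undirected graph, so mark both directions

adjacency_list = {i: [] for i in range(n)}      # each node maps to the list of its neighbors
for u, v in edges:
    adjacency_list[u].append(v)
    adjacency_list[v].append(u)

print(adjacency_matrix)   # [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(adjacency_list)     # {0: [1], 1: [0, 2], 2: [1]}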
SEARCHING
refers to the process of finding
specific elements or patterns within
a graph data structure.
DEPTH-FIRST SEARCH
Depth-First Search (DFS) is a graph traversal algorithm that explores as far as possible
along each branch before backtracking. Common uses include:
Finding connected components: You can use DFS to find all the nodes reachable
from a given node in a connected component of a graph.
Detecting cycles: DFS can be used to detect cycles in a graph, which is useful in
tasks like topological sorting or checking for the presence of loops.
Pathfinding: You can use DFS to find paths between nodes in a graph.
Maze solving: DFS can be applied to solve mazes or puzzles by exploring different
paths until a solution is found.
STEPS IN DEPTH-FIRST SEARCH
1. Start at a source node: You begin the traversal at a starting node (or vertex) in the
graph.
2. Visit and mark the node: You visit and mark the current node as visited to keep track
of the nodes you've explored.
3. Explore unvisited neighbors: From the current node, you choose an unvisited neighbor
(adjacent node) and move to that neighbor. This neighbor becomes the new current
node.
4. Recursively repeat: You repeat steps 2 and 3 for the new current node, exploring as
deeply as possible along each branch before backtracking to explore other branches.
5. Backtrack: When there are no unvisited neighbors from the current node, you
backtrack to the previous node and continue exploring other unvisited neighbors.
6. Repeat until all nodes are visited: You continue this process until you've visited all
nodes in the graph or until you've achieved your specific goal (e.g., finding a specific
node, detecting cycles, or traversing a path).
Sample codes for DFS:
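A minimal recursive sketch in Python, assuming the graph is stored as an adjacency-list dictionary (the example graph is hypothetical):

graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': [],
    'E': ['F'],
    'F': []
}

def dfs(graph, node, visited=None):
    # Visit nodes depth-first, printing each node the first time it is reached.
    if visited is None:
        visited = set()
    visited.add(node)                      # step 2: mark the current node as visited
    print(node, end=' ')
    for neighbor in graph[node]:           # step 3: pick an unvisited neighbor
        if neighbor not in visited:
            dfs(graph, neighbor, visited)  # step 4: recurse; backtracking happens on return
    return visited

dfs(graph, 'A')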
Output:
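A B D E F C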
Advantages and Disadvantages of using Depth-First Search
Advantages:
Simplicity: DFS is relatively easy to implement, both using recursive and iterative approaches. It doesn't require any
additional data structures other than a stack (for the iterative version) or the call stack (for the recursive version).
Memory Efficiency: The iterative version of DFS is memory-efficient because it uses a small amount of extra memory to
maintain the stack, which typically has a smaller memory footprint compared to the queue used in Breadth-First Search
(BFS).
Space Complexity: The space complexity of DFS is often better than BFS in terms of average case performance. DFS
tends to use less memory in graphs with a large branching factor.
Path Finding: DFS is useful for finding paths in a graph, such as finding a path between two nodes or finding all paths
from a source to a destination.
Easily Modified: DFS can be easily modified or extended to solve various graph-related problems, including cycle
detection, connectivity analysis, and more.
Disadvantages:
Completeness: DFS is not guaranteed to find a solution on infinite or implicitly defined graphs, because it can keep
descending along one branch indefinitely; which paths it finds first depends on the order in which neighbors are visited.
Non-Optimal Path: When used for finding the shortest path in weighted graphs, DFS is not guaranteed to find the optimal
path. It can find a path that is longer than the shortest one.
Stack Overflow (Recursion): In the recursive implementation of DFS, deep or infinite recursion can lead to a stack
overflow error, especially in graphs with deep branches.
Time Complexity: When visited nodes are not tracked (for example, when enumerating all paths or searching an implicit
state space), DFS can take exponential time; a standard graph DFS with a visited set runs in O(V + E).
Not Suitable for All Problems: DFS may not be the best choice for some graph problems, such as finding the shortest path
in weighted graphs, where algorithms like Dijkstra's or A* may be more appropriate.
BREADTH-FIRST SEARCH
Breadth-First Search (BFS) is a fundamental graph traversal algorithm used in
computer science and algorithms. It is primarily used for exploring and traversing
graphs or trees in a systematic manner.
STEPS IN BREADTH-FIRST SEARCH
Start at the Initial Node: Begin the traversal by selecting a starting node in the
graph or tree. This is where the exploration process will commence.
Explore Neighbors: Visit and process the current node. Explore all unvisited
neighbors of the current node before moving on to their neighbors. This ensures
that you traverse nodes at the same depth level before going deeper.
Queue Unvisited Neighbors: Add the unvisited neighbors of the current node to a
queue data structure. This step maintains a FIFO (First-In-First-Out) order for
node exploration, ensuring that nodes at the same level are processed before
moving deeper into the graph.
Dequeue and Repeat: Dequeue the front node from the queue, making it the new
current node for exploration. Repeat steps 2 and 3 for the newly selected current
node until the queue is empty.
Terminate: Once the queue is empty, the BFS traversal is complete. You have
visited all nodes reachable from the initial node.
SAMPLE CODES:
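A minimal sketch in Python, assuming the same hypothetical adjacency-list dictionary as in the DFS example above:

from collections import deque

graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': [],
    'E': ['F'],
    'F': []
}

def bfs(graph, start):
    # Visit nodes level by level, printing each node when it is dequeued.
    visited = {start}
    queue = deque([start])                 # FIFO queue of discovered-but-unprocessed nodes
    while queue:
        node = queue.popleft()             # dequeue the front node
        print(node, end=' ')
        for neighbor in graph[node]:
            if neighbor not in visited:    # enqueue each unvisited neighbor exactly once
                visited.add(neighbor)
                queue.append(neighbor)

bfs(graph, 'A')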
OUTPUT:
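A B C D E F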
Advantages and Disadvantages of using Breadth-First Search
Advantages:
Guaranteed Shortest Path: BFS guarantees that it finds the shortest path between two nodes in an unweighted graph. It explores nodes level
by level, ensuring that shorter paths are found before longer ones.
Completeness: BFS is a complete algorithm, meaning it will always find a solution if one exists. If there is a path from the source to the target,
BFS will find it.
Exploration of Neighbor Nodes: BFS explores all neighbor nodes of a given node before moving on to their neighbors. This can be useful for
certain tasks like finding connected components or detecting cycles.
Memory Behavior: BFS is implemented with an explicit queue rather than deep recursion, so it avoids the call-stack growth (and
possible stack overflow) that a recursive Depth-First Search (DFS) can suffer on deep graphs.
Identifying Connected Components: BFS can be used to identify and enumerate connected components in an undirected graph efficiently.
Disadvantages:
Memory Usage: While BFS is memory-efficient compared to some other algorithms, it still requires more memory than DFS in most cases
because it needs to store all nodes at the current level in a queue. This can be a drawback in graphs with a large number of nodes.
Not Suitable for Weighted Graphs: BFS is designed for unweighted graphs. It does not take edge weights into account when finding paths,
which can lead to suboptimal results in weighted graphs.
Longer Paths in Practice: In graphs with many branches and deep levels, BFS may explore a large portion of the graph before finding a path,
leading to longer execution times in practice.
Queue Management Overhead: The queue data structure used in BFS can have overhead in terms of memory and processing, especially in
highly dynamic graphs where nodes are frequently added or removed.
Cycles Require Visited Tracking: Like any graph traversal, BFS must mark visited nodes; without that bookkeeping it can revisit
nodes indefinitely when the graph contains cycles.
Single Path Reported: When BFS finds a path, it reports only that one shortest path; it does not tell you whether other paths of
equal or greater length exist, which you may also want to know in some cases.
RECURSIVE PROCEDURES IN ALGORITHMS
RECURSIVE PROCEDURE
A recursive procedure solves a problem by calling itself on smaller instances of the same
problem. Designing one typically involves the following steps:
Base Case: Define a condition that specifies when the recursion should stop. This
condition is crucial to prevent infinite recursion and ensure that the algorithm
terminates. The base case typically provides a direct answer or solution to the
smallest possible instance of the problem.
Recursive Case: Express the solution of the current instance in terms of one or more
recursive calls on smaller instances of the problem, each moving closer to the base case.
Combine Results: In many recursive algorithms, you'll need to combine the results
from the recursive calls to solve the original problem. This step may involve
mathematical operations, merging data structures, or other relevant processing.
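As a small illustration (not from the slides), here is a recursive sum in Python that shows the base case, the recursive case, and the combine step:

def list_sum(values):
    # Base case: the empty list sums to 0 and is answered directly.
    if not values:
        return 0
    # Recursive case + combine: the head plus the sum of the smaller remaining list.
    return values[0] + list_sum(values[1:])

print(list_sum([3, 1, 4, 1, 5]))   # 14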
RECURSION VS ITERATION
Recursion: Recursive solutions are well-suited for problems with inherent
recursive structures, such as tree traversal or problems involving divide
and conquer strategies.
Iteration: Iterative solutions are suitable for problems that can be solved
with repetitive, sequential steps, such as searching through an array or
performing mathematical calculations.
Recursive Procedure for Connected Component
Often used to identify and label connected components within a graph. Connected components
are sets of vertices within a graph where each vertex is connected to every other vertex in the
same component, and there are no connections between vertices in different components.
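A minimal sketch in Python, assuming an undirected graph stored as an adjacency-list dictionary (the example graph is hypothetical):

def label_components(graph):
    labels = {}
    def visit(node, label):
        labels[node] = label                 # assign the current component's label
        for neighbor in graph[node]:
            if neighbor not in labels:       # recurse into neighbors not yet labeled
                visit(neighbor, label)
    current = 0
    for node in graph:
        if node not in labels:               # every unlabeled node starts a new component
            visit(node, current)
            current += 1
    return labels

g = {1: [2], 2: [1], 3: [4], 4: [3], 5: []}
print(label_components(g))   # {1: 0, 2: 0, 3: 1, 4: 1, 5: 2}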
Fibonacci Sequence
The Fibonacci sequence and recursive procedures are closely related because the Fibonacci
sequence is often used as a classic example to illustrate the concept of recursion in computer
science and mathematics. Recursive procedures are a natural way to generate the Fibonacci
sequence.
The Fibonacci sequence is defined recursively, where each number in the sequence is the sum of
the two preceding numbers. This recursive definition can be translated directly into a recursive
algorithm or procedure for generating Fibonacci numbers
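A direct translation of that recursive definition into Python (a minimal sketch, ignoring efficiency):

def fib(n):
    # Base cases: fib(0) = 0 and fib(1) = 1.
    if n <= 1:
        return n
    # Recursive case: each number is the sum of the two preceding numbers.
    return fib(n - 1) + fib(n - 2)

print([fib(i) for i in range(8)])   # [0, 1, 1, 2, 3, 5, 8, 13]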
Recursive Data Structure
Contains instances of the same data structure within itself. This concept is
often used in computer science and programming to solve problems where
the data can be naturally broken down into smaller, similar subproblems.
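For example (a minimal sketch in Python), a binary tree node is a recursive data structure because each node may contain further nodes of the same type, and code that processes it naturally mirrors that structure:

class TreeNode:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left        # left subtree: another TreeNode, or None
        self.right = right      # right subtree: another TreeNode, or None

def count_nodes(node):
    # The recursion mirrors the data: one node plus the counts of its two subtrees.
    if node is None:
        return 0
    return 1 + count_nodes(node.left) + count_nodes(node.right)

tree = TreeNode(1, TreeNode(2), TreeNode(3, TreeNode(4)))
print(count_nodes(tree))   # 4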
Recursive Backtracking
A popular technique used in computer science and algorithm design to solve problems
that involve finding solutions through a sequence of choices or decisions.
How Recursive Backtracking works in a recursive
procedure or algorithm:
Choice and Exploration: At each step of the algorithm, you have a set of choices or options
to consider. You make one choice and proceed with the exploration of that choice.
Recursion: You use a recursive procedure to explore the consequences of the chosen option.
This means that you call the same algorithm/function recursively with a modified state or
context to explore the next step.
Backtracking: If, during the exploration, you find that the current choice leads to a dead
end or doesn't satisfy the problem's constraints or conditions, you backtrack. Backtracking
means that you return from the current recursive call to the previous one and explore other
options from there.
Termination Condition: The recursion continues until you find a solution (or solutions) that
satisfy the problem's conditions, or you have explored all possible choices.
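A minimal sketch in Python (a made-up subset-sum example, not from the slides) showing choice, recursion, backtracking, and the termination condition:

def find_subset(numbers, target, chosen=None, start=0):
    if chosen is None:
        chosen = []
    if target == 0:                      # termination condition: current choices form a solution
        return list(chosen)
    for i in range(start, len(numbers)):
        if numbers[i] <= target:
            chosen.append(numbers[i])    # choice: include numbers[i]
            result = find_subset(numbers, target - numbers[i], chosen, i + 1)  # recursion
            if result is not None:
                return result
            chosen.pop()                 # backtracking: undo the choice and try another option
    return None                          # dead end: no remaining choice works

print(find_subset([3, 34, 4, 12, 5, 2], 9))   # [3, 4, 2]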
LIMITATIONS OF RECURSIVE PROCEDURE:
Stack Overflow: Recursive algorithms often rely on the call stack to manage function calls. If
the recursion depth is too deep, it can lead to a stack overflow error, causing the program to
crash. This limitation restricts the size of the input that can be processed using recursion.
Performance Overhead: Recursive function calls can introduce a performance overhead due to
the repeated creation of stack frames. This overhead can be significant for deeply nested or
large-scale recursive algorithms and may result in slower execution compared to iterative
alternatives.
Memory Usage: Each recursive call consumes memory by creating a new stack frame that stores
local variables and function call information. For problems with a high degree of recursion or
large input data, memory consumption can become a concern.
Limited Applicability: Not all problems have a natural recursive structure, and attempting to
solve them recursively may lead to convoluted and inefficient solutions. Recursive algorithms
are best suited for problems that can be naturally divided into smaller, similar subproblems.
PROOF BY INDUCTION
EXAMPLE:
We have an infinite number of dominos labeled 1, 2, 3….,
Let P(n) be the proposition that the nth domino is knocked over.
DOMINOES
When we knock over the first domino, P(1) is true.
We know that if the kth domino is knocked down, it will knock over the
(k+1)st domino, that is, P(k) → P(k+1). This holds for all positive integers k.
Hence all dominoes are knocked over. And P(n) is true for all positive
integers.
PROOF BY INDUCTION
Proof by Mathematical Induction:
Statement: For all positive integers n, the sum of the first n positive
integers is given by the formula 1 + 2 + ... + n = n(n + 1)/2.
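The standard argument runs as follows.
Base case: for n = 1, the left side is 1 and the right side is 1(1 + 1)/2 = 1, so the statement holds.
Inductive step: assume 1 + 2 + ... + k = k(k + 1)/2 for some k >= 1. Then
1 + 2 + ... + k + (k + 1) = k(k + 1)/2 + (k + 1) = (k + 1)(k + 2)/2,
which is exactly the formula for n = k + 1. By induction, the formula holds for all positive integers n.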
PROOF BY INDUCTION
In programming, proof by induction is not used directly as a proof technique
the way it is in mathematics. However, the principles of mathematical induction
can inspire programming techniques and strategies for solving problems.
Here's how proof by induction concepts can be applied in programming:
PROOF BY INDUCTION
4. Algorithm Design: When designing algorithms, you often break
down complex problems into simpler subproblems and solve them
recursively or iteratively. Dynamic programming, for example, is a
technique that uses recursion and memoization to optimize solutions
to problems by solving each subproblem once and storing the
results.
5. Inductive Thinking: The ability to think inductively, breaking down
problems into smaller, manageable parts, is a valuable skill in
programming. It helps you devise efficient and scalable solutions by
recognizing patterns and applying them to larger instances of the
problem.
6. Error Handling: In programming, you often consider edge cases
(base cases) when handling errors or exceptional situations. You
design code to handle these cases explicitly to ensure robustness.
PROOF BY INDUCTION
CHECKING A RECURSION USING INDUCTION
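The check below refers to the usual recursive factorial; as a minimal sketch in Python, assume this definition:

def factorial(n):
    # Base case: 1! = 1.
    if n == 1:
        return 1
    # Recursive case: n! = n * (n - 1)!.
    return n * factorial(n - 1)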
Base Case:
When n == 1, factorial(1) returns 1, which equals 1!.
The function terminates.
Inductive Step:
Induction hypothesis: assume that the algorithm is correct for some value n = k >= 1,
that is, factorial(k) does return k!.
Does the function work for n = k + 1?
For n = k + 1, the function returns (k + 1) * factorial(k) = (k + 1) * k! = (k + 1)!,
so P(k) implies P(k + 1).
Therefore, by induction, factorial(n) returns n! for all n >= 1.
RECURSION TREES
Calucop, Jamil T.
RECURSION TREES
Used for visualizing what happens when a recurrence is
iterated. It diagrams the tree of recursive calls and the
amount of work done at each call.
Models the cost(time) of a recursive execution of an
algorithm.
Mainly used to generate a close guess of the actual
complexity.
Total cost (time) = Lc (cost of leaf nodes) + Ic (cost of internal nodes)
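As a quick illustration (a made-up example, not from the slides), consider the merge-sort-style recurrence T(n) = 2T(n/2) + n. Each level of the recursion tree does about n work in total and there are about log2(n) levels, so the guess is T(n) ≈ n log n. The small Python sketch below sums the tree's costs level by level and compares the result with n log2(n):

import math

def tree_cost(n):
    # Leaves of size 1 cost 1; an internal call costs n plus the cost of its two subtrees.
    if n <= 1:
        return 1
    return n + 2 * tree_cost(n // 2)

n = 1024
print(tree_cost(n), n * math.log2(n))   # 11264 10240.0 -- close to n * log2(n)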
REASONS WHY RECURSION TREES ARE ONLY FOR GUESSES:
1. Complexity of Recursive Calls: In some recursive algorithms, the
number of recursive calls and the work done at each level of the
recursion can vary, making it challenging to create an accurate tree.
For example, in divide-and-conquer algorithms like merge sort or
quicksort, the work at each level depends on the size of the input
and the specific partitioning strategy.
2. Non-Uniform Work: In recursive algorithms, not all recursive calls
necessarily perform the same amount of work. Some calls might
involve more complex calculations or processing than others, making
it difficult to accurately model this variation in a simple tree.
3. Recursive Overhead: The recursion tree method often focuses on the
recursive calls themselves but may not account for the overhead
introduced by function calls and parameter passing. In some cases,
this overhead can be significant and impact the overall time
complexity.
THANK YOU