Algorithm (1)
SCHOOL OF COMPUTING
Group Assignment
Name ID
1. Kaleb Getachew 1402313
2. Bikila Mitiku 1401543
3. Tashale Getachew 1403067
4. Hermela Bogale 1405090
5. Kerima Buser 1406443
2. Resource Management
By analyzing complexity, we can estimate the resources (time and space) required by an algorithm. This
helps in managing and optimizing the use of computational resources, which is particularly important in
environments with limited resources.
3. Comparative Analysis
Complexity analysis allows us to compare different algorithms for the same problem. By understanding
their time and space complexities, we can choose the most efficient algorithm for a given situation.
4. Scalability
Algorithms with lower complexity are generally more scalable. This means they can handle larger inputs
without a significant increase in resource consumption, making them suitable for real-world applications
where data sizes can be enormous.
5. Identifying Bottlenecks
Analyzing complexity helps identify parts of the algorithm that are potential bottlenecks. This allows
developers to focus on optimizing these critical sections to improve overall performance.
6. Theoretical Insights
Complexity analysis provides theoretical insights into the behavior of algorithms. It helps in
understanding the inherent difficulty of problems and the limits of what can be achieved with current
computational methods.
7. Algorithm Design
When designing new algorithms, complexity analysis guides the development process. It helps in creating
algorithms that are not only correct but also efficient and practical for real-world use.
8. Cost Efficiency
Efficient algorithms reduce computational costs, which is particularly important in commercial and
industrial applications where processing power and time translate directly into expenses.
9. Optimization
Understanding complexity is the first step towards optimizing algorithms. It provides a baseline for
measuring improvements and ensures that optimizations lead to tangible benefits in performance.
Probabilistic Algorithm
A probabilistic algorithm uses random numbers or random decisions during its execution to influence
its behavior. It may produce different outcomes for the same input on different executions, depending on
the random choices made.
Deterministic Algorithm
A deterministic algorithm operates in a completely predictable manner. Given the same input, it will
always follow the same sequence of steps and produce the same output.
Key Differences
• Execution Behavior: A probabilistic algorithm incorporates randomness and may have multiple outcomes; a deterministic algorithm is predictable and produces the same result for the same input.
• Output: A probabilistic algorithm's output can be probabilistic, with correctness or performance guaranteed only with some probability; a deterministic algorithm always produces a definite and consistent result.
• Efficiency: A probabilistic algorithm is often simpler or faster for some problems but might trade accuracy; a deterministic algorithm's efficiency depends on the algorithm and the problem.
• Applications: Probabilistic algorithms are used in cryptography, Monte Carlo simulations, and optimization; deterministic algorithms in sorting, searching, and mathematical computation.
Flowcharts
(Figures: Deterministic Algorithm Flowchart and Probabilistic Algorithm Flowchart)
The flowcharts above visually compare Probabilistic Algorithms and Deterministic Algorithms:
1. Probabilistic Algorithm:
o Introduces randomness in the "Perform Operation" step.
o May lead to different outcomes depending on random choices.
2. Deterministic Algorithm:
o Follows a fixed sequence of operations.
o Produces the same output for identical inputs every time.
These flowcharts highlight the key structural difference: the use of randomness in probabilistic
algorithms versus predictability in deterministic algorithms.
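To make the contrast concrete, here is a minimal Python sketch (the function names deterministic_max and monte_carlo_pi are illustrative, not part of the assignment) that pairs a deterministic routine with a randomized one:

    import random

    def deterministic_max(values):
        # Deterministic: the same input always follows the same steps
        # and returns the same result.
        best = values[0]
        for v in values[1:]:
            if v > best:
                best = v
        return best

    def monte_carlo_pi(samples=100_000):
        # Probabilistic: the result depends on random samples and
        # varies slightly from run to run.
        inside = 0
        for _ in range(samples):
            x, y = random.random(), random.random()
            if x * x + y * y <= 1.0:
                inside += 1
        return 4.0 * inside / samples

    print(deterministic_max([3, 7, 2]))  # always 7
    print(monte_carlo_pi())              # close to 3.14159, but differs per run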
Backtracking is an algorithmic technique used for solving problems incrementally by building candidates for
the solution step by step, and removing those candidates ("backtracking") that fail to satisfy the constraints
of the problem. It is often used for problems involving search or optimization where a sequence of decisions
is required to arrive at a solution.
1. N-Queens Problem: Place N queens on an N×N chessboard such that no two queens
threaten each other.
2. Sudoku Solver: Fill a 9×9 grid such that every row, column, and subgrid contains unique
numbers from 1 to 9.
3. Knapsack Problem (with constraints): Find subsets of items that fit a given weight limit and
maximize value.
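As an illustration of the technique, here is a small Python sketch of an N-Queens solver using backtracking (a standard formulation; the helper names are our own):

    def solve_n_queens(n):
        # Build a placement column by column; undo ("backtrack") any partial
        # placement that violates the no-attack constraints.
        solutions = []
        placement = []  # placement[col] = row of the queen in that column

        def is_safe(row, col):
            for c, r in enumerate(placement):
                if r == row or abs(r - row) == abs(c - col):
                    return False
            return True

        def place(col):
            if col == n:
                solutions.append(list(placement))
                return
            for row in range(n):
                if is_safe(row, col):
                    placement.append(row)  # extend the candidate solution
                    place(col + 1)
                    placement.pop()        # backtrack and try the next row

        place(0)
        return solutions

    print(len(solve_n_queens(8)))  # 92 distinct solutions on an 8x8 board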
Advantages of Backtracking
Limitations
• Can be slow for problems with large solution spaces (e.g., exponential time complexity).
• Performance depends heavily on the efficiency of the feasibility checks.
Dynamic Programming (DP) and Divide-and-Conquer are two powerful algorithmic paradigms used for
solving complex problems by breaking them into smaller subproblems. While they share similarities, their
approaches and applications differ significantly. Here's a comparison:
1. Key Idea:
o Solves problems by breaking them into overlapping subproblems and storing the solutions to
subproblems to avoid redundant computation.
o Utilizes a table (memoization or tabulation) to keep track of already-computed results.
2. Subproblem Overlap:
o Subproblems recur multiple times during a naive computation, so DP avoids recomputation by storing their results.
o Example: Fibonacci sequence, shortest paths (e.g., Bellman-Ford, Floyd-Warshall).
3. Optimal Substructure:
o DP requires the problem to have an optimal substructure, meaning the optimal solution to the
problem can be constructed from the optimal solutions of its subproblems.
o Example: Knapsack problem, Longest Common Subsequence (LCS).
4. Approach:
o Bottom-Up: Build the solution iteratively from smaller subproblems.
o Top-Down: Use recursion with memoization to cache results (see the sketch after this list).
5. Time Complexity:
o Typically faster than divide-and-conquer for problems with overlapping subproblems, as results are reused.
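A minimal top-down DP sketch in Python, using the standard functools.lru_cache decorator for memoization:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib(n):
        # Each subproblem fib(k) is computed once and cached (memoization),
        # reducing the naive exponential recursion to O(n) time.
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    print(fib(50))  # 12586269025, with no subproblem recomputed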
Divide-and-Conquer
1. Key Idea:
o Solves problems by dividing them into independent subproblems, solving these recursively,
and then combining their results.
o Does not store results of subproblems for reuse.
2. Independent Subproblems:
o Subproblems are independent and do not overlap, meaning they are solved separately without
sharing information.
o Example: Merge Sort, Quick Sort, Binary Search.
3. Optimal Substructure:
o Divide-and-conquer also requires optimal substructure but focuses on independent solutions
to combine them.
4. Approach:
o Divide the problem into smaller subproblems.
o Solve each subproblem recursively.
o Combine the results to solve the original problem.
5. Time Complexity:
o Recursive calls add overhead, and when subproblems overlap (as in a naïve recursive Fibonacci), results may be recomputed, making it less efficient than dynamic programming in those cases.
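For comparison, a small divide-and-conquer sketch in Python: recursive binary search on a sorted list (a standard formulation, shown here purely for illustration):

    def binary_search(sorted_values, target, lo=0, hi=None):
        # Divide: pick the middle element and keep only the half that can
        # contain the target. Conquer: recurse on that half. Combine: trivial.
        if hi is None:
            hi = len(sorted_values) - 1
        if lo > hi:
            return -1  # target not present
        mid = (lo + hi) // 2
        if sorted_values[mid] == target:
            return mid
        if sorted_values[mid] < target:
            return binary_search(sorted_values, target, mid + 1, hi)
        return binary_search(sorted_values, target, lo, mid - 1)

    print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3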
Key Differences
• Subproblem Type: Dynamic programming handles overlapping subproblems; divide-and-conquer handles independent subproblems.
• Storage: Dynamic programming stores subproblem results (memoization/tabulation); divide-and-conquer does not store results.
• Recomputation: Dynamic programming avoids recomputation through memoization; divide-and-conquer may recompute results.
• Applications: Dynamic programming suits optimization problems; divide-and-conquer suits sorting, searching, and tree-based problems.
• Examples: Dynamic programming: Fibonacci, LCS, Knapsack. Divide-and-conquer: Merge Sort, Quick Sort, Binary Search.
A greedy algorithm is an algorithmic paradigm that makes a series of decisions, choosing the option
that seems best at each step (locally optimal choice) in the hope that this will lead to a globally optimal
solution. It does not reconsider its choices after making them, which makes it efficient but sometimes
suboptimal.
1. Local Optimality:
o The algorithm selects the best option available at each step without considering the
bigger picture.
2. Greedy Choice Property:
o A globally optimal solution can be arrived at by selecting local optimal choices at
every step.
3. Optimal Substructure:
o The solution to a problem can be constructed efficiently from solutions to its
subproblems.
4. No Backtracking:
o Greedy algorithms do not backtrack or revise choices once made.
Greedy algorithms work best when the following conditions are satisfied:
• Greedy Choice Property: a globally optimal solution can be reached by making locally optimal choices.
• Optimal Substructure: the optimal solution to the problem contains optimal solutions to its subproblems.
When these conditions are not met, a greedy algorithm might fail to produce the correct solution.
Advantages
Limitations
• Non-Optimal Solutions: Does not guarantee a globally optimal solution for problems lacking
the greedy choice property or optimal substructure.
• Irrevocable Choices: Once a choice is made, it cannot be undone, which may lead to
suboptimal results.
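A small Python sketch of this limitation, using the classic coin-change counterexample with denominations {1, 3, 4} (chosen here purely for illustration):

    def greedy_coin_count(amount, denominations):
        # Greedy: always take the largest coin that still fits;
        # the choice is never revised.
        coins = 0
        for d in sorted(denominations, reverse=True):
            coins += amount // d
            amount %= d
        return coins

    # For amount 6 with coins {1, 3, 4}, greedy picks 4 + 1 + 1 (3 coins),
    # but the optimal answer is 3 + 3 (2 coins): the greedy choice property fails here.
    print(greedy_coin_count(6, [1, 3, 4]))  # 3, not the optimal 2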
Problem:
Given n activities with start and finish times, select the maximum number of activities that don’t
overlap.
Greedy Approach:
• Sort the activities by finish time.
• Select the first activity, then repeatedly select the next activity whose start time is not earlier than the finish time of the last selected activity.
Time Complexity:
• Sorting: O(n log n)
• Selection: O(n)
Total: O(n log n)
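A possible Python sketch of this greedy approach (activities are represented here as (start, finish) tuples; the function name is our own):

    def select_activities(activities):
        # Greedy rule: sort by finish time, then take every activity whose
        # start time is not earlier than the last selected finish time.
        ordered = sorted(activities, key=lambda a: a[1])  # O(n log n)
        chosen, last_finish = [], float("-inf")
        for start, finish in ordered:                     # O(n)
            if start >= last_finish:
                chosen.append((start, finish))
                last_finish = finish
        return chosen

    print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (8, 9), (5, 9)]))
    # [(1, 4), (5, 7), (8, 9)]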
Summary
A greedy algorithm builds a solution through locally optimal, irrevocable choices. It is simple and efficient, but it guarantees a globally optimal result only when the greedy choice property and optimal substructure hold.
The divide-and-conquer approach is an algorithm design paradigm that solves a problem by recursively
breaking it into smaller subproblems, solving these subproblems independently, and then combining their
solutions to solve the original problem.
Key Characteristics
• Recursive Nature:
o The process is inherently recursive, as each subproblem is further divided until it becomes
trivially solvable.
• Independent Subproblems:
o Subproblems are usually independent of one another, which makes parallelism possible.
• Optimal Substructure:
o The solution to the overall problem can be constructed from the solutions to its subproblems.
Applications of Divide-and-Conquer
1. Sorting Algorithms:
o Merge Sort:
▪ Divide the array into halves, sort each half recursively, and merge the sorted halves.
o Quick Sort:
▪ Partition the array into elements smaller and larger than a pivot, then recursively sort
the partitions.
2. Searching Algorithms:
o Binary Search:
▪ Divide the sorted array into halves and determine which half contains the target
element.
3. Matrix Multiplication:
o Strassen’s Algorithm:
▪ Breaks matrices into smaller submatrices, performs multiplications recursively, and
combines results.
4. Closest Pair of Points (Computational Geometry):
o Divide the points into two halves, find the closest pair in each half, and then find the closest
pair across the divide.
5. Fast Fourier Transform (FFT):
o Decomposes a signal into smaller signals for efficient computation of its frequency
components.
Advantages of Divide-and-Conquer
1. Simpler Design:
o Breaking down a complex problem into smaller, more manageable parts simplifies the logic.
2. Efficiency:
o Many divide-and-conquer algorithms achieve logarithmic or linearithmic time complexity (e.g., O(log n) for Binary Search and O(n log n) for Merge Sort).
3. Parallelism:
o Independent subproblems can be solved simultaneously, leveraging modern multi-core
processors.
Limitations
1. Overhead:
o Recursive calls and recombination steps can introduce overhead, particularly for small
problems.
2. Not Always Optimal:
o If subproblems overlap, divide-and-conquer may recompute solutions, making it less efficient
(e.g., naïve Fibonacci).
3. Requires Optimal Substructure:
o The problem must have a structure where the solution to the overall problem can be derived
from solutions to its parts.
Steps and their costs (for a typical case such as Merge Sort):
• Divide: O(1)
• Conquer: 2T(n/2)
• Combine: O(n)
Solving the recurrence T(n) = 2T(n/2) + O(n) gives a total running time of O(n log n).
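A minimal Merge Sort sketch in Python, with comments marking the divide, conquer, and combine steps that give the recurrence above:

    def merge_sort(values):
        if len(values) <= 1:
            return values
        mid = len(values) // 2                 # Divide: O(1)
        left = merge_sort(values[:mid])        # Conquer: T(n/2)
        right = merge_sort(values[mid:])       # Conquer: T(n/2)
        merged, i, j = [], 0, 0                # Combine: merge in O(n)
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]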
Summary
Divide-and-conquer is a powerful and versatile paradigm best suited for problems that can be divided into independent subproblems, solved recursively, and combined efficiently into a solution to the original problem.
In algorithm analysis, we evaluate the performance of an algorithm based on how long it takes (its time
complexity) or how much space it uses (its space complexity) as a function of the input size. These analyses
are categorized into worst-case, average-case, and best-case analysis, which refer to different scenarios for
measuring an algorithm's efficiency.
1. Worst-Case Analysis
• Definition:
o The worst-case analysis evaluates the algorithm’s performance under the least favorable
conditions, where the input is arranged in such a way that it causes the algorithm to take the
longest possible time or use the most resources.
• Use Case:
o Provides an upper bound on the algorithm’s performance, ensuring that no matter what input
is given, the algorithm will not exceed the worst-case time or space limits.
• Example:
o For Quick Sort, the worst-case occurs when the pivot is always the smallest or largest element
(e.g., when the input array is already sorted), resulting in a time complexity of O(n^2).
o For Merge Sort, the worst-case time complexity is O(n log n), which remains the same regardless of the input.
• Importance:
o Worst-case analysis is critical in real-time or safety-critical systems where performance must
always meet certain guarantees.
2. Average-Case Analysis
• Definition:
o The average-case analysis looks at the expected performance of an algorithm over a random
distribution of inputs. It calculates the expected time or space complexity by averaging the
performance of the algorithm across all possible inputs of a given size.
• Use Case:
o Used when you expect most inputs to fall within a "typical" range, and you want to understand
the algorithm's behavior under normal conditions.
• Example:
o For Quick Sort, the average-case time complexity is O(n log n) since, on average, the pivot
divides the array into two nearly equal parts.
• Importance:
o Average-case analysis gives a more realistic idea of the algorithm's performance in practice,
especially when input data is not always pathological.
3. Best-Case Analysis
• Definition:
o The best-case analysis evaluates the algorithm's performance under the most favorable
conditions, where the input is arranged to allow the algorithm to complete its task in the
minimum time or use the least number of resources.
• Use Case:
o Provides insight into how well the algorithm performs in ideal situations, though it may not
be as useful for practical purposes, as these ideal situations might not occur often.
• Example:
o For Insertion Sort, the best case occurs when the array is already sorted. In this case, the
algorithm performs only O(n) comparisons, as no elements need to be moved.
• Importance:
o Best-case analysis can help understand the potential performance of the algorithm in ideal
scenarios, but it’s generally not used as a primary factor in decision-making.
Key Differences
1. Quick Sort:
o Worst-case: O(n^2) (when the pivot is the smallest or largest element).
o Average-case: O(n log n) (when the pivot divides the array evenly).
o Best-case: O(n log n) (when the pivot divides the array into equal halves).
2. Merge Sort:
o Worst-case: O(n log n) (always divides the array into halves and merges).
o Average-case: O(n log n).
o Best-case: O(n log n).
3. Insertion Sort:
o Worst-case: O(n^2) (when the array is in reverse order).
o Average-case: O(n^2) (on average, half of the elements are out of order).
o Best-case: O(n) (when the array is already sorted).
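A short Python sketch of Insertion Sort, annotated with where the best-case and worst-case behavior comes from:

    def insertion_sort(values):
        # Best case (already sorted): the inner while loop never runs,
        # so only about n comparisons are made -> O(n).
        # Worst case (reverse sorted): every element shifts past all the
        # previously sorted ones -> O(n^2).
        data = list(values)
        for i in range(1, len(data)):
            key = data[i]
            j = i - 1
            while j >= 0 and data[j] > key:
                data[j + 1] = data[j]
                j -= 1
            data[j + 1] = key
        return data

    print(insertion_sort([1, 2, 3, 4]))  # best case: input already sorted
    print(insertion_sort([4, 3, 2, 1]))  # worst case: input in reverse order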
Summary
• Worst-case analysis provides the upper bound for performance, ensuring an algorithm won’t perform
worse than a certain threshold.
• Average-case analysis considers the expected behavior, reflecting more realistic performance in
everyday scenarios.
• Best-case analysis shows the optimal performance, which is often theoretical and rarely encountered
in practice.
9. Explain the purpose of Big O, Big Theta, and Big Omega notations.
1. Big O (O-notation):
o Purpose: Describes the upper bound of an algorithm’s time or space complexity. It is most often used to describe the worst-case scenario, bounding the maximum growth rate of the algorithm as the input size increases.
o Example: O(n^2) means the algorithm’s running time grows no faster than on the order of n^2 as the input size increases.
2. Big Theta (Θ-notation):
o Purpose: Describes the tight bound of an algorithm’s complexity. It gives both the upper
and lower bounds, meaning it describes the exact growth rate of the algorithm for large
inputs.
o Example: Θ(n log n) means the algorithm takes n log n time for both the best and worst cases.
3. Big Omega (Ω-notation):
o Purpose: Describes the lower bound of an algorithm’s complexity. It guarantees the minimum time or space the algorithm will take, and is often associated with the best-case scenario.
o Example: Ω(n) means the algorithm will take at least on the order of n time, even in the best case.
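For reference, these notations can be stated formally as follows (standard textbook definitions, written here in LaTeX notation):

    f(n) \in O(g(n)) \iff \exists\, c > 0,\ n_0 > 0 \text{ such that } 0 \le f(n) \le c\,g(n) \text{ for all } n \ge n_0
    f(n) \in \Omega(g(n)) \iff \exists\, c > 0,\ n_0 > 0 \text{ such that } 0 \le c\,g(n) \le f(n) \text{ for all } n \ge n_0
    f(n) \in \Theta(g(n)) \iff f(n) \in O(g(n)) \text{ and } f(n) \in \Omega(g(n))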
Summary
Big O bounds an algorithm’s growth rate from above, Big Omega bounds it from below, and Big Theta pins down the exact growth rate; together they let us state precise guarantees about how an algorithm scales.
The time complexity of an algorithm measures how its performance changes as the input size n increases.
To calculate it systematically, follow these steps:
Identify the Basic Operation: Determine the most frequently executed operation, such as a comparison,
assignment, or iteration. This operation defines the overall complexity of the algorithm.
Analyze the Algorithm Structure: Examine the algorithm to identify loops, recursive calls, and conditionals.
For loops, count their iterations. For recursion, derive a recurrence relation to estimate its execution time.
Count the Frequency of the Basic Operation: Calculate how often the basic operation is executed based on
the input size n. For single loops, the frequency equals the number of iterations. For nested loops, multiply
the iterations of the inner and outer loops.
Express the Complexity as a Function of n: Combine the results into a mathematical function representing
the total number of basic operations in terms of n.
Simplify Using Big O Notation: Simplify the function by keeping the term with the highest growth rate and
ignoring constants and lower-order terms.
For example, in a single loop that runs n times, the basic operation is executed n times, leading to a time complexity of O(n). In a nested loop where the outer and inner loops both iterate n times, the basic operation executes n × n = n^2 times, resulting in O(n^2). For divide-and-conquer algorithms like Merge Sort, the recurrence relation T(n) = 2T(n/2) + O(n) simplifies to O(n log n).
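A small Python sketch of the nested-loop case, with the basic operation (the comparison) counted in the comments:

    def count_pairs(values):
        # Basic operation: the comparison inside the inner loop.
        # The outer loop runs n times and the inner loop runs n times per
        # outer iteration, so the comparison executes n * n = n^2 times -> O(n^2).
        n = len(values)
        pairs = 0
        for i in range(n):
            for j in range(n):
                if values[i] == values[j]:
                    pairs += 1
        return pairs

    print(count_pairs([1, 2, 1]))  # 5 matching (i, j) pairs out of 9 comparisons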
By following these steps, you can determine the time complexity of an algorithm and understand its
scalability as the input size grows.