
Time Complexity Analysis of Algorithms

Submitted to
Dr. Naveed Anwar Butt
Submitted by
Tooba Khan 22021519-016
Class
BSCS – 6 – C
Course Name
Design and Analysis of Algorithms
Semester
Spring 2025
Deadline
20th February 2025

Hafiz Hayat Campus


Section 01: Theoretical Analysis of Time Complexity using Big-O Notation

Task 01: Searching Algorithms

Case 01: Linear Search


Finding Recursion, Loops, and Important Operations
• The function uses a for loop to iterate over the full array.
• The key operation is the comparison arr[i] == x performed inside the loop.
• In the worst scenario, the algorithm looks at each array entry before returning -1.

Counting Iterations in Relation to Input Size n: The loop runs from i = 0 to i = n-1, so it
performs n iterations in the worst case (the element is at the last position or is not
present at all).

Using the Big-O Rules


• Since the number of operations grows linearly with the input size, n is the dominant term.
• After dropping constants, the time complexity is O(n).

Conclusion
• Worst-case complexity: O(n) (element is not present or found at the last position).
• Best-case complexity: O(1) (element is found at the first position).
• Average-case complexity: O(n) (element is expected to be somewhere in the middle on average).
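For reference, a minimal Linear Search sketch in Python is given below; the function name linear_search and its parameters are illustrative assumptions, since the assignment's original source code is not reproduced here.

    def linear_search(arr, x):
        # Scan the array from index 0 to n-1.
        for i in range(len(arr)):
            # Key operation: one comparison per iteration.
            if arr[i] == x:
                return i   # best case: found at the first position -> O(1)
        return -1          # worst case: n comparisons -> O(n)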

Case 02: Fibonacci Search


Finding Recursion, Loops, and Important Operations
• The function begins by generating Fibonacci numbers in a while loop until the generated
Fibonacci number is at least as large as n.
• Another while loop in the algorithm's main body checks elements at Fibonacci-indexed
positions to narrow down the search space.
• As in Binary Search, each iteration sharply shrinks the search space.

Iteration Counting in Connection with Input Size


• Every iteration reduces the search space by a constant fraction determined by the Fibonacci numbers (roughly a factor of 1/φ).
• The number of iterations is about log_φ(n), where φ ≈ 1.618 is the Golden Ratio, comparable to Binary Search's O(log n).

Using the Big-O Rules


• The dominant term, log_φ(n), grows asymptotically as O(log n), since logarithms of different bases differ only by a constant factor.
• After dropping constants, the final complexity remains O(log n).

In conclusion
• The worst-case complexity, which is comparable to Binary Search, is O(log n).
• If the element appears in the first few comparisons, the best-case complexity is O(1).
• O(log n) is the average-case complexity.
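A hedged Python sketch of Fibonacci Search is shown below, assuming a sorted input array; the variable names are illustrative, and offset marks the front portion of the array that has already been eliminated.

    def fibonacci_search(arr, x):
        n = len(arr)
        # Generate the smallest Fibonacci number fib_m >= n.
        fib_m2, fib_m1 = 0, 1        # F(m-2), F(m-1)
        fib_m = fib_m2 + fib_m1      # F(m)
        while fib_m < n:
            fib_m2, fib_m1 = fib_m1, fib_m
            fib_m = fib_m2 + fib_m1

        offset = -1                  # index of the last eliminated element
        while fib_m > 1:
            # Probe a Fibonacci-indexed position.
            i = min(offset + fib_m2, n - 1)
            if arr[i] < x:
                # Discard everything up to i: move the Fibonacci window down by one.
                fib_m, fib_m1 = fib_m1, fib_m2
                fib_m2 = fib_m - fib_m1
                offset = i
            elif arr[i] > x:
                # Discard everything after i: move the window down by two.
                fib_m = fib_m2
                fib_m1 = fib_m1 - fib_m2
                fib_m2 = fib_m - fib_m1
            else:
                return i
        # A single candidate element may remain unchecked.
        if fib_m1 and offset + 1 < n and arr[offset + 1] == x:
            return offset + 1
        return -1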

A Comparison of Binary and Fibonacci Search


• Because Binary Search uses simple index calculations, it is typically faster and more practical.
• Fibonacci Search may be useful in memory-constrained settings where division and multiplication are expensive operations, since its index arithmetic needs only addition and subtraction.
• Both approaches outperform Linear Search, which makes them well suited to searching sorted datasets.

Task 02: Sorting Algorithms

Case 01: Bubble Sort


Finding Recursion, Loops, and Important Operations
Bubble Sort consists of two nested loops.
• The outer loop performs up to n-1 passes, each pass moving the largest unsorted element to its proper position.
• The inner loop performs (n-1), (n-2), ..., 1 iterations to compare and swap neighboring elements.
• Comparing and swapping adjacent elements is the key operation.

Iteration Counting in Connection with Input Size


• The total number of comparisons/swaps is (n−1) + (n−2) + ... + 1 = n(n−1)/2.
• In the worst case, this simplifies to O(n²).

Using the Big-O Rules


• We obtain O(n²) by dropping constants and lower-order terms.
In conclusion
• When the array is sorted in reverse order, the worst-case complexity is O(n²).
• Best-case complexity: O(n) (a single pass with no swaps when the array is already sorted, assuming the common early-exit optimization).
• The average complexity (randomly sorted inputs) is O(n²).

Why Bubble Sort is inefficient for large datasets


• High time complexity: Bubble Sort requires O(n²) comparisons in both the average and worst cases, which makes it impractical for large datasets.
• Unnecessary swaps: Bubble Sort performs many unnecessary swaps, so it is slower than more sophisticated sorting algorithms such as Merge Sort or Quick Sort.
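A minimal Bubble Sort sketch in Python is given below, including the common early-exit flag that produces the O(n) best case noted above; the names are illustrative assumptions.

    def bubble_sort(arr):
        n = len(arr)
        for i in range(n - 1):            # outer loop: up to n-1 passes
            swapped = False
            # Inner loop shrinks each pass: (n-1), (n-2), ..., 1 comparisons.
            for j in range(n - 1 - i):
                if arr[j] > arr[j + 1]:   # key operation: compare adjacent elements
                    arr[j], arr[j + 1] = arr[j + 1], arr[j]
                    swapped = True
            if not swapped:               # no swaps: already sorted, best case O(n)
                break
        return arr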

Case 02: Merge Sort

Finding Recursion, Loops, and Important Operations


• Merge Sort uses a divide-and-conquer strategy:
1. Divide: the array is recursively split in half until single-element subarrays are reached.
2. Conquer: the sorted halves are merged back together.
• The key operations are splitting the array and merging the sorted subarrays.

Iteration Counting in Connection with Input Size n


• The array is repeatedly split in half, giving a recursion depth of log(n).
• At each level of recursion, all n elements are processed during merging.
• The total number of operations is therefore proportional to n log(n).

Using the Big-O Rules


• The algorithm has log n levels of recursion, each processing n elements in total, so the dominant term is n log n.
• After dropping constants and lower-order terms, the complexity remains O(n log n).

In conclusion
• Worst-case complexity: O(n log n) (the full sequence of splits and merges is always performed).
• Best-case complexity: O(n log n) (recursive calls are made even if the data is already sorted).
• Average-case complexity: O(n log n), because of the consistent divide-and-conquer structure.
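A short Merge Sort sketch in Python is given below; the recursive structure mirrors the divide-and-conquer steps described above, and the function name is an illustrative assumption.

    def merge_sort(arr):
        # Divide: split until subarrays contain at most one element.
        if len(arr) <= 1:
            return arr
        mid = len(arr) // 2
        left = merge_sort(arr[:mid])
        right = merge_sort(arr[mid:])
        # Conquer: merge the two sorted halves (O(n) work per recursion level).
        merged = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:   # <= keeps equal keys in order (stable)
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged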
Merge Sort and Quick Sort performance comparison
• Merge Sort is preferred when stability is crucial, such as when sorting records with several fields.
• Quick Sort is typically faster in practice because it sorts in place and requires less memory.
• For large datasets, both algorithms outperform O(n²) sorting algorithms like Bubble Sort and Insertion Sort.

Section 02: Empirical Analysis of Time Complexity

Task 02: Measuring Execution Time

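As an illustration, one hedged way to measure execution time in Python uses time.perf_counter; the helper function measure, the random data, and the input sizes below are assumptions rather than the assignment's actual measurement setup.

    import random
    import time

    def measure(sort_fn, n):
        # Build a random input of size n and time a single call to sort_fn.
        data = [random.randint(0, 10**6) for _ in range(n)]
        start = time.perf_counter()
        sort_fn(data)
        return time.perf_counter() - start

    # Python's built-in sorted() is timed here as a baseline; the bubble_sort and
    # merge_sort sketches above could be passed in exactly the same way.
    for n in (1000, 2000, 4000):
        print(n, measure(sorted, n))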
