
Quick Sort and Its Worst-Case Analysis

The document covers various algorithms including Quick Sort, Huffman Encoding, Matrix Chain Multiplication, and graph traversal methods like BFS and DFS. Quick Sort is a divide-and-conquer algorithm with a worst-case time complexity of O(n²), while Huffman Encoding is a lossless compression technique with an overall complexity of O(n log n). Matrix Chain Multiplication optimizes the order of multiplications with a complexity of O(n³), and both BFS and DFS are fundamental graph traversal algorithms with a time complexity of O(V+E).

Uploaded by

Anushka Paul

Quick Sort and Its Worst-Case Analysis

Quick Sort Algorithm:

Quick Sort is a divide-and-conquer sorting algorithm that works as follows:

1. Choose a Pivot: Select an element from the array (e.g., first, last, random, or median).
2. Partitioning: Rearrange elements so that:
o Elements smaller than the pivot go to the left.
o Elements greater than the pivot go to the right.
3. Recursion: Recursively apply the above steps to the left and right subarrays until the
base case is reached (single-element subarrays).
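The three steps above can be sketched in Python (an illustrative choice; the notes themselves give no code), here using Lomuto partitioning with the last element as the pivot:

```python
def quick_sort(arr, low=0, high=None):
    """Sort arr in place; returns arr for convenience."""
    if high is None:
        high = len(arr) - 1
    if low < high:
        p = partition(arr, low, high)   # pivot lands at its final index p
        quick_sort(arr, low, p - 1)     # recurse on the left subarray
        quick_sort(arr, p + 1, high)    # recurse on the right subarray
    return arr

def partition(arr, low, high):
    pivot = arr[high]                   # choose the last element as pivot
    i = low - 1                         # boundary of the "smaller than pivot" region
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]   # place pivot between the regions
    return i + 1
```

Note that this fixed last-element pivot is exactly the choice that triggers the worst case on already-sorted input, as analyzed below.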

Worst-Case Analysis:

The worst case occurs when the pivot selection leads to the most unbalanced partitions. This
happens in two common cases:

1. Pivot is Always the Smallest or Largest Element (e.g., sorted or reverse-sorted arrays when the first or last element is chosen as the pivot).
2. Highly Skewed Partitioning: one partition gets n−1 elements and the other gets 0 elements.

In the worst case, the recursive calls resemble a degenerate linked list structure, leading to
T(n) = T(n-1) + O(n).

Solving this recurrence using the summation method:

T(n) = T(n−1) + O(n)
     = T(n−2) + O(n−1) + O(n)
     = O(1) + O(2) + ⋯ + O(n)
     = O(n²)

Thus, the worst-case time complexity of Quick Sort is O(n²).

Avoiding Worst Case:

• Use Randomized Quick Sort (choosing a random pivot).
• Use Median-of-Three Pivot Selection.
• Hybrid Sorting (e.g., switch to Insertion Sort for small partitions).

Best and Average Case Complexity:

• Best Case (Balanced Partitioning): O(n log n).
• Average Case: O(n log n).

Quick Sort is often preferred for practical sorting due to its low constant factors and
efficient in-place sorting behavior, despite its worst-case potential.
Huffman Encoding

Huffman Encoding is a lossless data compression algorithm that assigns variable-length binary codes
to characters based on their frequencies. It ensures that more frequent characters get shorter codes,
while less frequent characters get longer codes.

Steps in Huffman Encoding

1. Calculate Frequency: Count the occurrences of each character in the input.

2. Build a Min-Heap: Create a priority queue (min-heap) where each node represents a
character and its frequency.

3. Construct Huffman Tree:

o Remove the two nodes with the smallest frequency from the heap.

o Merge them into a new node with a combined frequency.

o Insert the new node back into the heap.

o Repeat until only one node (root) remains.

4. Generate Huffman Codes:

o Traverse the Huffman tree.

o Assign '0' to the left and '1' to the right at each level.

o Generate unique binary codes for each character.

5. Encoding & Decoding:

o Replace characters in the input with their corresponding binary codes.

o To decode, follow the binary sequence from the root of the Huffman tree.
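Steps 1–4 can be sketched in Python (an illustrative choice) with the standard heapq module as the min-heap; the function name huffman_codes is illustrative. Because heap tie-breaking is arbitrary, the exact bit patterns may differ from the worked example below, but the code lengths are the same:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Return {character: bitstring} built from character frequencies in text."""
    freq = Counter(text)                            # Step 1: frequency count
    # Heap entries: (frequency, unique tiebreaker, subtree). A subtree is
    # either a single character or a (left, right) pair of subtrees.
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)                             # Step 2: build a min-heap
    count = len(heap)
    while len(heap) > 1:                            # Step 3: build the tree
        f1, _, left = heapq.heappop(heap)           # two smallest frequencies
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))  # merged node
        count += 1
    codes = {}
    def walk(node, prefix):                         # Step 4: assign codes
        if isinstance(node, tuple):                 # internal node: recurse
            walk(node[0], prefix + "0")             # '0' on the left branch
            walk(node[1], prefix + "1")             # '1' on the right branch
        else:
            codes[node] = prefix or "0"             # single-symbol edge case
    _, _, root = heap[0]
    walk(root, "")
    return codes
```

Encoding (step 5) is then just "".join(codes[ch] for ch in text); for the 15-character example string below this produces 28 bits.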

Example

Given String:

BCAADDDCCACACAC

Step 1: Frequency Calculation

Character   Frequency
A           5
B           1
C           6
D           3
Step 2: Building the Huffman Tree

1. Combine B (1) and D (3) → Node (BD) with frequency 4.

2. Combine A (5) and BD (4) → Node (ABD) with frequency 9.

3. Combine C (6) and ABD (9) → Root node (CABD) with frequency 15.

Step 3: Assign Huffman Codes

          (Root, 15)
          /        \
       C (0)     ABD (9)
                 /      \
             A (10)    BD (4)
                       /    \
                  B (110)  D (111)

Huffman Codes:

• A → 10

• B → 110

• C→0

• D → 111

Step 4: Encoding

Original string: BCAADDDCCACACAC


Encoded string: 110 0 10 10 111 111 111 0 0 10 0 10 0 10 0

Complexity Analysis

• Building the Huffman Tree: O(n log n) (using a min-heap)

• Encoding/Decoding: O(n)

• Overall Complexity: O(n log n)

Advantages of Huffman Encoding

✔ Optimal Compression for character-based frequency encoding.
✔ Lossless Compression, meaning no data loss.
✔ Widely Used in ZIP, JPEG, and MP3 file formats.

Limitations
Requires prior frequency analysis, which may not be efficient for streaming data.
Not ideal for small text files where overhead can be significant.

Huffman Encoding is an essential technique for efficient data compression and is widely used in real-
world applications like file compression and data transmission.

Matrix chain multiplication:

Example

Given Matrices

A₁ (10×20), A₂ (20×30), A₃ (30×40), A₄ (40×30)

Possible Parenthesization

1. ((A₁ A₂) A₃) A₄

2. (A₁ (A₂ A₃)) A₄

3. A₁ ((A₂ A₃) A₄)

4. (A₁ A₂) (A₃ A₄)

5. A₁ (A₂ (A₃ A₄))

Each parenthesization results in a different number of scalar multiplications.
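The standard dynamic-programming solution can be sketched in Python (an illustrative choice; dims encodes the four example matrices as the dimension sequence 10, 20, 30, 40, 30):

```python
def matrix_chain_order(dims):
    """dims[i-1] x dims[i] is the shape of matrix A_i (1-indexed).
    Returns the minimum number of scalar multiplications for the chain."""
    n = len(dims) - 1                       # number of matrices
    # m[i][j] = cheapest cost to compute the product A_i ... A_j
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # solve shorter chains first
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(
                m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)        # try every split point
            )
    return m[1][n]
```

For the example dims [10, 20, 30, 40, 30] this returns 30000, achieved by the split (A₁ A₂ A₃) A₄ with (A₁ A₂) A₃ inside.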

Complexity Analysis

• Subproblems: O(n^2)

• Transitions: O(n)

• Overall Time Complexity: O(n^3)

Advantages of MCM

✔ Efficient for moderate matrix sequences
✔ Avoids redundant calculations using DP
✔ Used in compiler optimizations, neural networks, and parallel computing

Limitations

Does not reduce the cost of the individual matrix multiplications; it only optimizes their order
Not suitable for very large n due to O(n³) complexity
Breadth-First Search (BFS) and Depth-First Search (DFS)

Both BFS and DFS are fundamental graph traversal algorithms used in various applications like
pathfinding, web crawling, network analysis, and AI search problems.

1. Breadth-First Search (BFS)

Concept

• Explores all neighbors at the current level before moving to the next level.

• Uses a queue (FIFO structure) for traversal.

Algorithm Steps

1. Start from a source node.

2. Enqueue the source node and mark it as visited.

3. While the queue is not empty:

o Dequeue a node.

o Process the node.

o Enqueue all unvisited neighbors.

4. Repeat until all nodes are visited.
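The steps above can be sketched in Python (an illustrative choice), using collections.deque as the FIFO queue; the adjacency-list graph format is an assumption:

```python
from collections import deque

def bfs(graph, source):
    """Return nodes in BFS order from source.
    graph: adjacency list, e.g. {'A': ['B', 'C'], ...}."""
    visited = {source}                  # mark the source before enqueueing it
    queue = deque([source])
    order = []
    while queue:
        node = queue.popleft()          # FIFO: dequeue the oldest node
        order.append(node)              # "process" the node
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)  # enqueue unvisited neighbors
    return order
```

Marking nodes as visited at enqueue time (rather than dequeue time) is what keeps each node in the queue at most once, giving the O(V) space bound below.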

Time & Space Complexity

• Time Complexity: O(V + E), where V is the number of vertices and E the number of edges
• Space Complexity: O(V) (for the queue and visited list)

2. Depth-First Search (DFS)

Concept

• Explores a path deeply before backtracking.

• Uses a stack (LIFO structure) or recursion.

Algorithm Steps

1. Start from a source node.

2. Mark the node as visited.

3. Visit an unvisited neighbor and recursively apply DFS.

4. Backtrack if no unvisited neighbors remain.

5. Repeat until all nodes are visited.
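The recursive variant of the steps above can be sketched in Python (an illustrative choice), on the same assumed adjacency-list format as the BFS sketch:

```python
def dfs(graph, source, visited=None, order=None):
    """Return nodes in DFS visitation order from source.
    graph: adjacency list, e.g. {'A': ['B', 'C'], ...}."""
    if visited is None:
        visited, order = set(), []
    visited.add(source)                 # mark the node as visited
    order.append(source)                # "process" the node
    for neighbor in graph.get(source, []):
        if neighbor not in visited:
            dfs(graph, neighbor, visited, order)   # go deep before moving on
    return order                        # returning = backtracking to the caller
```

An iterative version would replace the recursion with an explicit stack; recursion depth can reach O(V) on path-like graphs, which is the space bound noted below.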


Time & Space Complexity

• Time Complexity: O(V+E)

• Space Complexity: O(V) (for recursion or stack storage)

Use Cases

When to Use BFS?

✔ Finding the shortest path in an unweighted graph (e.g., Google Maps).
✔ Level-order traversal in trees.
✔ Network broadcasting (e.g., spreading information).
✔ Web crawling (visiting all pages level-wise).

When to Use DFS?

✔ Pathfinding in mazes/games.
✔ Cycle detection in graphs.
✔ Topological Sorting in DAGs (Dependency Resolution).
✔ Backtracking algorithms (e.g., Sudoku Solver, N-Queens).
