Job Sequencing
The Job Sequencing Problem is a classic greedy-algorithm problem in computer science and operations
research. The goal is to schedule jobs so that the maximum total profit is earned while respecting
each job's deadline.
📌 Problem Statement
Objective: Given a set of jobs, each with a deadline and a profit (earned only if the job finishes by its deadline), schedule jobs to maximize total profit. Each job takes one unit of time, and only one job can be scheduled at a time.
🧠 Greedy Approach
1. Sort all jobs in decreasing order of profit.
2. Use a time slot array to track free time slots up to the maximum deadline.
3. For each job, find the latest available slot at or before its deadline.
4. If a slot is found, assign the job and mark the slot as occupied.
🧮 Example
Jobs = [{id: 'A', deadline: 2, profit: 100}, {id: 'B', deadline: 1, profit: 19}, {id: 'C', deadline: 2, profit: 27},
{id: 'D', deadline: 1, profit: 25}, {id: 'E', deadline: 3, profit: 15}]
Sorted by profit: A (100), C (27), D (25), B (19), E (15)
Step-by-step Scheduling:
A → slot 2 → [_, A, _]
C → slot 1 → [C, A, _]
D → no free slot at or before deadline 1 → skipped
B → no free slot at or before deadline 1 → skipped
E → slot 3 → [C, A, E]
Selected Jobs: C, A, E
Total Profit: 27 + 100 + 15 = 142
✅ Time Complexity
Sorting the jobs takes O(n log n); assigning each job by scanning backwards from its deadline takes O(n · d), where d is the maximum deadline, so this simple implementation is O(n²) in the worst case.
🛠 Python Code
class Job:
    def __init__(self, job_id, deadline, profit):
        self.id = job_id
        self.deadline = deadline
        self.profit = profit

def job_sequencing(jobs):
    # Sort jobs by profit in decreasing order
    jobs = sorted(jobs, key=lambda j: j.profit, reverse=True)
    max_deadline = max(job.deadline for job in jobs)
    slots = [False] * (max_deadline + 1)          # slots[1..max_deadline], True = occupied
    job_sequence = [None] * (max_deadline + 1)
    total_profit = 0
    for job in jobs:
        # Find the latest free slot at or before the job's deadline
        for slot in range(job.deadline, 0, -1):
            if not slots[slot]:
                slots[slot] = True
                job_sequence[slot] = job.id
                total_profit += job.profit
                break
    return [j for j in job_sequence if j], total_profit

# Example
jobs = [Job('A', 2, 100), Job('B', 1, 19), Job('C', 2, 27), Job('D', 1, 25), Job('E', 3, 15)]
print(job_sequencing(jobs))   # (['C', 'A', 'E'], 142)
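The inner slot scan above is what makes the assignment loop quadratic. A common optimization (shown here only as a sketch, not part of the code above) finds the latest free slot with a disjoint-set (union-find) structure in near-constant amortized time, so the total cost is dominated by the O(n log n) sort. The names job_sequencing_dsu and find are illustrative, and the sketch reuses the Job class and jobs list defined above:

def job_sequencing_dsu(jobs):
    jobs = sorted(jobs, key=lambda j: j.profit, reverse=True)
    max_deadline = max(j.deadline for j in jobs)
    parent = list(range(max_deadline + 1))       # parent[t] points towards the latest free slot <= t
    def find(t):
        while parent[t] != t:                    # path halving keeps lookups near-constant
            parent[t] = parent[parent[t]]
            t = parent[t]
        return t
    total_profit, schedule = 0, []
    for job in jobs:
        slot = find(job.deadline)
        if slot > 0:                             # slot 0 means no free slot remains
            parent[slot] = slot - 1              # occupy the slot: it now points one step earlier
            schedule.append((slot, job.id))
            total_profit += job.profit
    return sorted(schedule), total_profit

print(job_sequencing_dsu(jobs))   # ([(1, 'C'), (2, 'A'), (3, 'E')], 142)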
Prim’s Algorithm
Prim’s Algorithm is a greedy algorithm used to find the Minimum Spanning Tree (MST) of a
connected, undirected, weighted graph.
A Minimum Spanning Tree connects all the vertices in a graph with the minimum possible total edge
weight, without forming any cycles.
✅ Key Concepts
Input: A connected, undirected, weighted graph.
Output: A tree that connects all vertices with the least total weight.
Steps:
1. Start from any arbitrary vertex.
2. Add it to the MST.
3. Repeatedly add the cheapest edge that connects a vertex in the MST to a vertex outside it.
4. Stop when all vertices are included in the MST.
📌 Data Structures
Priority Queue / Min-Heap: Selects the next minimum-weight edge efficiently.
Visited Set: Tracks the vertices already included in the MST.
🧮 Example
Graph:
     2        3
A ------ B ------ C
|        |        |
6        8        7
|        |        |
D ------ E ------ F
     9        5
Edges: A–B (2), B–C (3), A–D (6), B–E (8), C–F (7), D–E (9), E–F (5)
Vertices: A, B, C, D, E, F
MST (Prim's, starting from A) includes edges:
A–B (2)
B–C (3)
A–D (6)
C–F (7)
E–F (5)
Total weight = 2 + 3 + 6 + 7 + 5 = 23
🛠 Python Code
import heapq

def prim(graph, start):
    visited = set()
    total_cost = 0
    min_heap = [(0, start)]                      # (edge weight, vertex)
    while min_heap:
        weight, u = heapq.heappop(min_heap)
        if u not in visited:
            visited.add(u)
            total_cost += weight
            # Push edges from u to all unvisited neighbours
            for v, w in graph[u]:
                if v not in visited:
                    heapq.heappush(min_heap, (w, v))
    return total_cost

# Example (adjacency list of the graph above)
graph = {
    'A': [('B', 2), ('D', 6)],
    'B': [('A', 2), ('C', 3), ('E', 8)],
    'C': [('B', 3), ('F', 7)],
    'D': [('A', 6), ('E', 9)],
    'E': [('B', 8), ('D', 9), ('F', 5)],
    'F': [('C', 7), ('E', 5)],
}
print(prim(graph, 'A'))   # 23
⏱ Time Complexity
O(E log V) with a binary min-heap and an adjacency list (each edge is pushed and popped at most once); O(V²) with an adjacency matrix and no heap.
Huffman Coding
Huffman Coding is a popular lossless data compression algorithm. It is used to compress data
efficiently by assigning shorter codes to more frequent characters and longer codes to less frequent
characters, thereby reducing the overall size of the data.
🔧 How It Works
1. Frequency Count:
o Count how often each character appears in the input data.
2. Build a Min-Heap:
o Create a leaf node for each character and insert it into the min-heap based on frequency.
3. Build the Huffman Tree:
o Repeatedly remove the two nodes with the lowest frequencies and merge them into a new internal node.
o The frequency of the new node is the sum of the two nodes.
o Repeat until only one node (the root of the Huffman Tree) remains.
4. Assign Codes:
o Traverse the tree from the root, appending 0 for a left branch and 1 for a right branch; the path to each leaf is that character's code.
5. Encode Data:
o Replace each character in the input with its corresponding Huffman code.
6. Decode Data:
o Use the Huffman Tree to convert the binary code back to characters.
📦 Example
Input string: ABBCAB
Frequencies:
A: 2
B: 3
C: 1
Huffman Tree:
        (*,6)
        /    \
    (*,3)    B:3
    /    \
  C:1    A:2
Codes:
B: 1
A: 01
C: 00
Encoded Output:
01 1 1 00 01 1 → 011100011
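For reference, here is a minimal Python sketch (not from the original notes) that builds Huffman codes with heapq for the frequencies above and decodes the result. The names huffman_codes and huffman_decode are illustrative, and the exact 0/1 labels may differ from the tree above depending on how ties are broken, though the code lengths (and hence the 9-bit total) match:

import heapq

def huffman_codes(freq):
    # Heap entries: (frequency, tie-breaker, node); a node is a character or a (left, right) pair
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)       # two lowest-frequency nodes
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))   # merged node's frequency is the sum
        count += 1
    codes = {}
    def assign(node, code):
        if isinstance(node, tuple):             # internal node: 0 for left, 1 for right
            assign(node[0], code + '0')
            assign(node[1], code + '1')
        else:
            codes[node] = code or '0'           # lone-character edge case
    assign(heap[0][2], '')
    return codes

def huffman_decode(bits, codes):
    inverse = {code: ch for ch, code in codes.items()}
    out, buffer = [], ''
    for bit in bits:                            # prefix-free codes: the first match is correct
        buffer += bit
        if buffer in inverse:
            out.append(inverse[buffer])
            buffer = ''
    return ''.join(out)

codes = huffman_codes({'A': 2, 'B': 3, 'C': 1})
print(codes)                                           # e.g. {'B': '0', 'C': '10', 'A': '11'}
encoded = ''.join(codes[ch] for ch in 'ABBCAB')
print(encoded, '->', huffman_decode(encoded, codes))   # 110010110 -> ABBCAB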
✅ Advantages
Lossless: the original data can be reconstructed exactly.
Produces an optimal prefix code for the given character frequencies, so no code is a prefix of another and decoding is unambiguous.
Widely used in applications like ZIP files, JPEG, and MP3 formats.
🚫 Limitations
Requires the character frequencies in advance, so the data must be scanned twice (or the frequency table transmitted along with the output).
Gives little compression when the character frequencies are nearly uniform.
Master Theorem
The Master Theorem provides a straightforward way to analyze the time complexity of divide-and-conquer algorithms, especially those that follow a recurrence of the form:
T(n) = a·T(n/b) + f(n)
where:
a ≥ 1: the number of subproblems
b > 1: the factor by which the problem size shrinks
f(n): the cost of dividing and combining the subproblems (non-recursive work)
Case 1: f(n) = O(n^(log_b a − ε)) for some ε > 0
Then:
T(n) = Θ(n^(log_b a))
Case 2: f(n) = Θ(n^(log_b a) · log^k n) for some k ≥ 0
Then:
T(n) = Θ(n^(log_b a) · log^(k+1) n)
Case 3: f(n) = Ω(n^(log_b a + ε)) for some ε > 0
and
a·f(n/b) ≤ c·f(n) for some c < 1 and all sufficiently large n (regularity condition)
Then:
T(n) = Θ(f(n))
Example 1
T(n) = 2T(n/2) + n
a = 2, b = 2, f(n) = n
n^(log_2 2) = n, and f(n) = Θ(n)
→ Case 2 (k = 0) ⇒
T(n) = Θ(n log n)
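As a concrete instance (a sketch, not from the original notes), merge sort satisfies exactly this recurrence: two recursive calls on halves plus linear merging work.

def merge_sort(arr):
    # a = 2 recursive calls on halves (b = 2) plus O(n) merge work: T(n) = 2T(n/2) + n
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left, right = merge_sort(arr[:mid]), merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):      # merge the two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))   # [1, 2, 5, 7, 9]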
Example 2
T(n) = 8T(n/2) + n²
a = 8, b = 2, f(n) = n²
n^(log_2 8) = n³, and f(n) = O(n^(3 − ε))
→ Case 1 ⇒
T(n) = Θ(n³)
Example 3
T(n) = 2T(n/2) + n²
a = 2, b = 2, f(n) = n²
n^(log_2 2) = n
→ Case 3 ⇒
→ f(n) = Ω(n^(1 + ε)), and the regularity condition holds since 2·(n/2)² = n²/2 ≤ c·n² with c = 1/2 < 1
T(n) = Θ(n²)
🚫 Limitations
Doesn’t handle every f(n): functions that fall between the cases (for example f(n) = n / log n with a = b = 2) are not covered.
Doesn’t handle recurrences with multiple different subproblem sizes (e.g., T(n) = T(n/2) + T(n/3) + n).
Strassen’s Matrix Multiplication
Strassen’s algorithm is a divide-and-conquer method for multiplying square matrices. The traditional algorithm takes O(n³) time; Strassen reduced the number of multiplications needed, improving the time complexity to O(n^(log_2 7)) ≈ O(n^2.81).
📌 Key Idea
Instead of performing 8 multiplications (as in the traditional method), Strassen uses only 7 recursive
multiplications with extra additions and subtractions.
🧠 Algorithm Overview
Given two n × n matrices A and B, divide each into 4 submatrices:
A = | A11  A12 |      B = | B11  B12 |
    | A21  A22 |          | B21  B22 |
Compute 7 products:
M1 = (A11 + A22)(B11 + B22)
M2 = (A21 + A22) · B11
M3 = A11 · (B12 − B22)
M4 = A22 · (B21 − B11)
M5 = (A11 + A12) · B22
M6 = (A21 − A11)(B11 + B12)
M7 = (A12 − A22)(B21 + B22)
Combine them into the quadrants of C = A·B:
C11 = M1 + M4 − M5 + M7
C12 = M3 + M5
C21 = M2 + M4
C22 = M1 − M2 + M3 + M6
🕒 Time Complexity
The recurrence is T(n) = 7T(n/2) + O(n²), which by the Master Theorem (Case 1) gives T(n) = Θ(n^(log_2 7)) ≈ Θ(n^2.81), compared with Θ(n³) for the standard 8-multiplication approach.
🛠 Python Code
import numpy as np

def strassen(A, B):
    n = A.shape[0]
    if n == 1:
        return A * B                              # base case: 1x1 matrices
    mid = n // 2
    A11, A12, A21, A22 = A[:mid, :mid], A[:mid, mid:], A[mid:, :mid], A[mid:, mid:]
    B11, B12, B21, B22 = B[:mid, :mid], B[:mid, mid:], B[mid:, :mid], B[mid:, mid:]
    M1 = strassen(A11 + A22, B11 + B22)           # 7 recursive products
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C11 = M1 + M4 - M5 + M7                       # combine into result quadrants
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.vstack((np.hstack((C11, C12)), np.hstack((C21, C22))))

A = np.array([[1, 2], [3, 4]])                    # example: n must be a power of 2
B = np.array([[5, 6], [7, 8]])
print(strassen(A, B))                             # [[19 22] [43 50]]
❗ Notes
For small matrices or odd dimensions, padding the inputs and switching to naive multiplication below a cutoff size may work better in practice (a padding sketch follows).
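A minimal padding sketch (not from the original notes), assuming the strassen function defined above; strassen_padded is an illustrative name. Zero-padding both inputs up to the next power of two leaves the true product in the top-left block, which is then cropped out:

import numpy as np

def strassen_padded(A, B):
    # Pad both matrices with zeros up to the next power of two, multiply, then crop the result.
    n = max(A.shape[0], A.shape[1], B.shape[0], B.shape[1])
    m = 1
    while m < n:
        m *= 2
    Ap = np.zeros((m, m)); Ap[:A.shape[0], :A.shape[1]] = A
    Bp = np.zeros((m, m)); Bp[:B.shape[0], :B.shape[1]] = B
    C = strassen(Ap, Bp)                  # reuses strassen() from the code above
    return C[:A.shape[0], :B.shape[1]]    # crop back to the original output shape

# Example: 3x3 inputs are padded to 4x4 internally
A = np.arange(1, 10).reshape(3, 3)
B = np.eye(3)
print(strassen_padded(A, B))              # same values as A (multiplying by the identity)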