Algorithm And Complexity - Uma Madam
In the context of Algorithm and Complexity, Big O (O), Big Omega (Ω), and
Big Theta (Θ) are mathematical notations used to describe the asymptotic
behavior of algorithms in terms of their performance (time or space) relative to
input size, typically represented by n. These notations help analyze and compare
the efficiency of algorithms.
1. Big O (O) Notation:
Big O notation describes the upper bound of an algorithm's growth rate,
representing the worst-case scenario. It provides an upper limit on the running time,
ensuring that the algorithm will not exceed a certain time complexity as the input
grows.
Definition: If a function f(n) is O(g(n)), then for large enough n, f(n) will
never grow faster than a constant multiple of g(n).
Graph Representation: for sufficiently large n, the curve of c · g(n) lies above
the curve of the actual runtime f(n), representing the worst-case scenario.
Example: If an algorithm runs in O(n²), its running time grows at most
quadratically with the input size n.
2. Big Omega (Ω) Notation:
Big Omega notation describes the lower bound of an algorithm's growth rate,
representing the best-case scenario or the minimum time required for the algorithm
to complete, regardless of input size.
Definition: If a function f(n) is Ω(g(n)), then for large enough n, f(n) will not
grow slower than a constant multiple of g(n).
Graph Representation: for sufficiently large n, the curve of c · g(n) lies below
the curve of the actual runtime f(n), indicating the minimum time the algorithm will take.
Example: If an algorithm is Ω(n), it takes at least linear time, even in the
best case.
3. Big Theta (Θ) Notation:
Big Theta notation describes a tight bound, meaning the function grows at the
same rate both in the upper and lower bounds. It represents the exact asymptotic
behavior of an algorithm.
Definition: If a function f(n) is Θ(g(n)), then for large enough n, f(n) is
bounded both above and below by constant multiples of g(n). This means
f(n) and g(n) grow at the same rate asymptotically.
Graph Representation: constant multiples of g(n) sandwich the runtime curve
from above and below, indicating that the algorithm's time complexity is
tightly bound.
Example: If an algorithm has a complexity of Θ(n log n), its running time grows
proportionally to n log n for large inputs, in both the worst and best cases.
Graphical Representation:
O(g(n)): the bounding curve lies above the actual running time (upper bound).
Ω(g(n)): the bounding curve lies below the actual running time (lower bound).
Θ(g(n)): constant multiples of g(n) tightly bound the actual running time,
sandwiching it between the upper and lower bounds.
Summary in Graph:
O(g(n)): The function will not grow faster than g(n) (upper bound).
Ω(g(n)): The function will not grow slower than g(n) (lower bound).
Θ(g(n)): The function grows at the same rate as g(n) asymptotically (exact
bound).
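A quick worked check of all three definitions, using f(n) = 3n² + 2n as an
illustrative function: for every n ≥ 1,
3n² ≤ 3n² + 2n ≤ 3n² + 2n² = 5n².
Taking g(n) = n², f(n) is O(n²) (upper constant c = 5), Ω(n²) (lower constant
c = 3), and therefore Θ(n²).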
These notations are essential for understanding and analyzing algorithm efficiency,
particularly as the input size grows larger.
For example, f(n) = 2n is Θ(n) and f(n) = n² + 3n is Θ(n²): these notations
indicate that the functions grow linearly and quadratically, respectively, as n
increases.
Master Theorem (Case 2): if f(n) and n^(log_b a) grow at the same rate, that is
f(n) = Θ(n^(log_b a)), the time complexity is determined by both the recursive
calls and the cost of combining the results. The solution is:
T(n) = Θ(n^(log_b a) · log n)
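A standard instance of this case is merge sort: T(n) = 2T(n/2) + Θ(n), where
a = 2 and b = 2, so n^(log_2 2) = n matches f(n), giving T(n) = Θ(n log n).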
Bellman-Ford Algorithm: the algorithm computes shortest paths from a source
vertex and runs in O(V · E), where V is the number of vertices and E is the
number of edges.
Example:
Consider the following graph with 5 vertices (A, B, C, D, E) and the edges:
A → B (weight 6)
A → D (weight 7)
B → C (weight 5)
B → D (weight 8)
B → E (weight -4)
C → E (weight 2)
D → B (weight -3)
D → E (weight 9)
E → D (weight 7)
We want to find the shortest paths from vertex A.
1. Initialization:
o Distance from A to A = 0, all others are infinity:
Distance = {A: 0, B: ∞, C: ∞, D: ∞, E: ∞}
2. First Pass (relax every edge once, in the order listed):
o A → B sets B = 6, A → D sets D = 7, B → C sets C = 11, B → E sets E = 2, and
then D → B improves B to 7 − 3 = 4: Distance = {A: 0, B: 4, C: 11, D: 7, E: 2}
3. Second Pass:
o Continue relaxing the edges: B → C now improves C to 4 + 5 = 9, and B → E
improves E to 4 − 4 = 0. Passes 3 and 4 change nothing, so the distances have
converged: Distance = {A: 0, B: 4, C: 9, D: 7, E: 0}
4. Negative Cycle Check:
o If any edge can still be relaxed after V − 1 passes, the graph contains a
negative weight cycle. In this example, no negative cycle exists (the cycle
B → E → D → B has total weight −4 + 7 − 3 = 0, which is not negative).
Final Shortest Paths from A:
A → A: 0
A → B: 4 (via A → D → B)
A → C: 9 (via A → D → B → C)
A → D: 7 (via A → D)
A → E: 0 (via A → D → B → E)
Thus, Bellman-Ford finds the shortest paths from A to all other vertices, even with
negative edge weights, and detects negative weight cycles if present.
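As a minimal illustration (this C sketch is not part of the original notes;
the vertex letters A-E are mapped to array indices 0-4), the following program
runs Bellman-Ford on exactly the edge list above:
#include <stdio.h>
#include <limits.h>
/* Vertices A..E are mapped to indices 0..4. */
#define V 5
#define NE 9
#define INF INT_MAX
struct Edge { int u, v, w; };
int main(void) {
    /* The edge list from the example above. */
    struct Edge edges[NE] = {
        {0, 1, 6},  {0, 3, 7},  {1, 2, 5},  {1, 3, 8}, {1, 4, -4},
        {2, 4, 2},  {3, 1, -3}, {3, 4, 9},  {4, 3, 7}
    };
    int dist[V];
    for (int i = 0; i < V; i++) dist[i] = INF;
    dist[0] = 0;                              /* source vertex A */
    /* Relax every edge V - 1 times. */
    for (int pass = 0; pass < V - 1; pass++)
        for (int j = 0; j < NE; j++)
            if (dist[edges[j].u] != INF &&
                dist[edges[j].u] + edges[j].w < dist[edges[j].v])
                dist[edges[j].v] = dist[edges[j].u] + edges[j].w;
    /* One extra pass: any further improvement means a negative weight cycle. */
    int has_negative_cycle = 0;
    for (int j = 0; j < NE; j++)
        if (dist[edges[j].u] != INF &&
            dist[edges[j].u] + edges[j].w < dist[edges[j].v])
            has_negative_cycle = 1;
    if (has_negative_cycle)
        printf("Negative weight cycle detected\n");
    else
        for (int i = 0; i < V; i++)
            printf("A -> %c: %d\n", 'A' + i, dist[i]);
    return 0;
}
Running this prints the final distance table shown above (0, 4, 9, 7, 0).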
Find the time complexity of the following code:
#include <stdio.h>
void A(int n)
{
    int i, j, k;
    for (i = 1; i <= n; i++)            /* outer loop (1): n iterations */
        for (j = 1; j <= i; j++)        /* middle loop (2): i iterations */
            for (k = 1; k <= 100; k++)  /* inner loop (3): a constant 100 iterations */
                printf("RAVI");
}
Analysis:
1. Outer loop (i loop):
o Runs from i = 1 to i = n, so it executes n times.
2. Middle loop (j loop):
o Runs from j = 1 to j = i; for each value of i, the j loop iterates i times.
o Therefore, the total number of middle-loop iterations is:
∑ (i = 1 to n) i = n(n + 1)/2 = O(n²)
3. Inner loop (k loop):
o Runs from k = 1 to k = 100, which is constant: it always executes 100 times
for each iteration of the j loop.
Total Time Complexity:
For each iteration of the outer loop (which runs n times), the middle loop
runs i times, and the inner loop runs 100 times.
Therefore, the total number of printf("RAVI") executions is:
∑ (i = 1 to n) 100 · i = 100 · n(n + 1)/2 = O(n²)
Thus, the time complexity of the algorithm is O(n²).
Dijkstra's Algorithm is used to find the shortest path from a source node to all
other nodes in a weighted graph. It works by iteratively selecting the node with the
smallest tentative distance, exploring its neighbors, and updating their distances.
Steps of Dijkstra's Algorithm:
1. Initialization:
o Set the tentative distance of the source node to 0 and of all other nodes to
infinity.
o Mark all nodes as unvisited.
2. Selection:
o Among the unvisited nodes, pick the one with the smallest tentative distance
and mark it visited.
3. Relaxation:
o For each unvisited neighbor of the selected node, update its tentative
distance if the path through the selected node is shorter.
4. Repeat steps 2 and 3 until all nodes have been visited.
Example: consider a graph with the edges:
o A → C (1)
o B → C (2)
o B → D (5)
o C → D (6)
Starting from A (whose neighbors include B and C), the algorithm visits A
first, then repeatedly selects the closest unvisited node, marking A, then C,
then D as visited; a C sketch follows below.
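A minimal C sketch of this procedure follows (illustrative, not from the
original notes). Because the weight of the A → B edge is not given above, the
sketch uses only the four listed edges, so B turns out unreachable from the
source A:
#include <stdio.h>
#include <limits.h>
#define V 4                         /* A=0, B=1, C=2, D=3 */
#define INF INT_MAX
int main(void) {
    /* Adjacency matrix of the listed edges; 0 means "no edge". */
    int w[V][V] = {{0}};
    w[0][2] = 1;                    /* A -> C (1) */
    w[1][2] = 2;                    /* B -> C (2) */
    w[1][3] = 5;                    /* B -> D (5) */
    w[2][3] = 6;                    /* C -> D (6) */
    int dist[V], visited[V] = {0};
    for (int i = 0; i < V; i++) dist[i] = INF;
    dist[0] = 0;                    /* source vertex A */
    for (int iter = 0; iter < V; iter++) {
        /* Select the unvisited vertex with the smallest tentative distance. */
        int u = -1;
        for (int i = 0; i < V; i++)
            if (!visited[i] && (u < 0 || dist[i] < dist[u]))
                u = i;
        if (dist[u] == INF) break;  /* remaining vertices are unreachable */
        visited[u] = 1;
        /* Relax all edges leaving u. */
        for (int v = 0; v < V; v++)
            if (w[u][v] && !visited[v] && dist[u] + w[u][v] < dist[v])
                dist[v] = dist[u] + w[u][v];
    }
    for (int i = 0; i < V; i++)
        if (dist[i] == INF) printf("A -> %c: unreachable\n", 'A' + i);
        else                printf("A -> %c: %d\n", 'A' + i, dist[i]);
    return 0;
}
With the listed edges it prints A → A: 0, A → C: 1, and A → D: 7 (via C).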
10) Generate variable-length Huffman codes for the following set of frequencies:
a:30 b:5 c:2 d:28 e:13 f:10 g:8 h:20 i:6
The symbols are first listed in increasing order of frequency, and at each step
the two least frequent nodes are combined:
1. Initial sorted list:
[(2, c), (5, b), (6, i), (8, g), (10, f), (13, e), (20, h), (28, d), (30, a)]
2. Combine the two least frequent nodes:
o Combine c (2) and b (5) into a new node (7, cb).
[(6, i), (7, cb), (8, g), (10, f), (13, e), (20, h), (28, d), (30, a)]
3. Combine the next two least frequent nodes:
o Combine i (6) and cb (7) into a new node (13, icb).
[(8, g), (10, f), (13, icb), (13, e), (20, h), (28, d), (30, a)]
4. Combine the next two least frequent nodes:
o Combine g (8) and f (10) into a new node (18, gf).
[(13, icb), (13, e), (18, gf), (20, h), (28, d), (30, a)]
5. Combine the next two least frequent nodes:
o Combine icb (13) and e (13) into a new node (26, icbe).
[(18, gf), (20, h), (26, icbe), (28, d), (30, a)]
6. Combine the next two least frequent nodes:
o Combine gf (18) and h (20) into a new node (38, gfh).
[(26, icbe), (28, d), (30, a), (38, gfh)]
7. Combine the next two least frequent nodes:
o Combine icbe (26) and d (28) into a new node (54, icbed).
[(30, a), (38, gfh), (54, icbed)]
8. Combine the next two least frequent nodes:
o Combine a (30) and gfh (38) into a new node (68, agfh).
[(54, icbed), (68, agfh)]
9. Combine the last two nodes:
o Combine icbed (54) and agfh (68) into the root node (122, root).
[(122, root)]
Final Huffman Tree Structure:
                      root(122)
                     /         \
             icbed(54)          agfh(68)
             /       \           /      \
       icbe(26)     d(28)    a(30)    gfh(38)
       /      \                       /      \
   icb(13)   e(13)               gf(18)     h(20)
   /     \                       /    \
 i(6)   cb(7)                 g(8)   f(10)
        /    \
     c(2)   b(5)
Huffman Codes (assigning 0 to each left branch and 1 to each right branch):
a: 10
b: 00011
c: 00010
d: 01
e: 001
f: 1101
g: 1100
h: 111
i: 0000
Final Huffman Code for each symbol:
Symbol Frequency Code
a 30 10
b 5 00011
c 2 00010
d 28 01
e 13 001
f 10 1101
g 8 1100
h 20 111
i 6 0000
Thus, the Huffman codes for each symbol are generated based on their frequencies,
with the most frequent symbols getting shorter codes and the less frequent ones
getting longer codes.
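As a minimal illustration (this C sketch is not part of the original notes; it
uses a simple array-based tree with linear minimum search instead of a priority
queue), the following program builds a Huffman tree for these frequencies. Ties
between equal frequencies may be broken differently from the hand-worked steps,
so the exact bit patterns it prints may differ from the table above while
remaining equally optimal:
#include <stdio.h>
/* Leaves occupy indices 0..NSYM-1; merged nodes are appended after them. */
#define NSYM 9
#define NNODE (2 * NSYM - 1)
struct Node { int freq, left, right; };
static struct Node nodes[NNODE];
static int used[NNODE];
/* Index of the not-yet-merged node with the smallest frequency. */
static int extract_min(int count) {
    int best = -1;
    for (int i = 0; i < count; i++)
        if (!used[i] && (best < 0 || nodes[i].freq < nodes[best].freq))
            best = i;
    used[best] = 1;
    return best;
}
/* Walk the tree, emitting 0 for left branches and 1 for right branches. */
static void print_codes(int idx, char *buf, int depth, const char *syms) {
    if (nodes[idx].left < 0) {            /* leaf: print its code */
        buf[depth] = '\0';
        printf("%c: %s\n", syms[idx], buf);
        return;
    }
    buf[depth] = '0';
    print_codes(nodes[idx].left, buf, depth + 1, syms);
    buf[depth] = '1';
    print_codes(nodes[idx].right, buf, depth + 1, syms);
}
int main(void) {
    const char syms[] = "abcdefghi";
    const int freqs[NSYM] = {30, 5, 2, 28, 13, 10, 8, 20, 6};
    int count = NSYM;
    char buf[NNODE];
    for (int i = 0; i < NSYM; i++)
        nodes[i] = (struct Node){freqs[i], -1, -1};
    /* Greedy merging: combine the two least frequent nodes until one root remains. */
    while (count < NNODE) {
        int a = extract_min(count);
        int b = extract_min(count);
        nodes[count] = (struct Node){nodes[a].freq + nodes[b].freq, a, b};
        count++;
    }
    print_codes(NNODE - 1, buf, 0, syms);  /* the last node created is the root */
    return 0;
}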