Lecture 2
• Amortized Analysis
• Applications and Examples
• Types of Amortized Analysis Methods
• Aggregate Method
Amortized analysis is a technique used in computer science to analyze the average time complexity
of an algorithm over a sequence of operations, rather than just the worst-case time complexity of a
single operation. This approach provides a more accurate understanding of an algorithm's
performance in practice, especially when the worst-case scenario is rare or when an expensive
operation is offset by a series of cheaper operations.
There are three common methods of amortized analysis:
1. Aggregate Method: Calculates the total cost of n operations and then divides by n to find the
average cost per operation.
2. Accounting Method: Assigns a different "amortized" cost to each operation, ensuring that the total
amortized cost is at least as much as the total actual cost. This often involves overcharging some
operations to account for more expensive ones.
3. Potential Method: Uses a potential function to represent the "stored energy" or "potential" within
the data structure. The change in potential helps to balance out the costs of expensive operations
over a sequence of operations.
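As an illustrative sketch (not part of the lecture itself), the accounting method can be checked on the doubling dynamic array discussed below: charge each insertion an amortized cost of 3 units, and verify that the banked credit never goes negative. The function name and the charge of 3 are assumptions chosen for this sketch.

```python
# Sketch of the accounting method on a doubling dynamic array.
# Assumed charging scheme (not from the lecture): each insertion is
# charged 3 units -- 1 to place the element, 1 saved to pay for copying
# it at the next resize, and 1 to help copy an older element.

def check_accounting(n: int) -> bool:
    capacity, size, credit = 1, 0, 0
    for _ in range(n):
        credit += 3              # amortized charge for this insertion
        if size == capacity:     # resize: copy all current elements
            credit -= size       # pay 1 unit per copied element
            capacity *= 2
        credit -= 1              # pay for the insertion itself
        size += 1
        if credit < 0:           # the credit invariant must always hold
            return False
    return True

print(all(check_accounting(n) for n in range(1, 200)))  # True
```

Because the credit never goes negative, the total amortized cost (3 per operation) is an upper bound on the total actual cost, which is exactly what the accounting method requires.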
Consider a dynamic array that doubles in size when it runs out of space. The amortized analysis helps
to show that the average time complexity of inserting an element is O(1), even though resizing the
array takes O(n) time.
- Over a sequence of insertions, most operations are O(1), and only a few are O(n).
- Using the aggregate method, the total cost of n insertions, including resizing, is O(n), leading to
an average of O(1) per insertion.
Amortized analysis is widely used in analyzing data structures and algorithms, such as:
- Dynamic Arrays: As explained above, dynamic arrays use amortized analysis to provide efficient
average-case performance for insertions.
- Splay Trees: A type of self-adjusting binary search tree where operations run in O(log n)
amortized time.
- Union-Find Data Structures: Used in disjoint-set operations where the union and find operations
have near-constant amortized time.
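To make the last application concrete, here is a minimal disjoint-set sketch (the class and method names are illustrative, not from the lecture) using path compression and union by size, the two heuristics that give near-constant amortized cost per operation.

```python
# Minimal disjoint-set (union-find) sketch with path compression and
# union by size; the amortized cost per operation is near-constant
# (inverse-Ackermann). Names are illustrative.

class DisjointSet:
    def __init__(self, n: int):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x: int) -> int:
        # Locate the root, then point every visited node directly at it
        # (path compression), flattening the tree for future finds.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a: int, b: int) -> None:
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:   # union by size: attach the
            ra, rb = rb, ra                 # smaller tree under the larger
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

ds = DisjointSet(5)
ds.union(0, 1)
ds.union(1, 2)
print(ds.find(2) == ds.find(0))  # True
print(ds.find(3) == ds.find(0))  # False
```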
The aggregate method is one of the simplest techniques in amortized analysis. It involves calculating
the total cost of a sequence of operations and then dividing by the number of operations to determine
the average (amortized) cost per operation. This method is particularly useful when the cost of
individual operations can vary significantly, but the total cost over many operations is predictable
and manageable.
1. Total Cost Calculation: Determine the total cost of a sequence of n operations.
2. Amortized Cost Calculation: Divide the total cost by the number of operations n.
Consider the dynamic array, also known as an array list or vector, which resizes by doubling its
capacity when it runs out of space. Here's how the aggregate method can be applied to this scenario:
1. Insertions Without Resizing: Each insertion in an array without resizing takes O(1) time.
2. Resizing: When the array is full, resizing it takes O(n) time because all n elements need to be copied
to the new array.
Starting from an array of capacity 1, the costs of successive insertions are:
- Insertion 1: Insert the first element into the empty array. (Cost: 1)
- Insertion 2: Insert the second element, causing a resize (from capacity 1 to 2), and copy the first
element. (Cost: 1 for insertion + 1 for copying = 2)
- Insertion 3: Insert the third element, causing another resize (from capacity 2 to 4), and copy 2
elements. (Cost: 1 for insertion + 2 for copying = 3)
- Insertion 4: Insert the fourth element; no resize is needed. (Cost: 1)
- Insertion 5: Insert the fifth element, causing a resize (from capacity 4 to 8), and copy 4 elements.
(Cost: 1 for insertion + 4 for copying = 5)
- And so on...
- Elements are copied during the resize operations (an element may be copied more than once). The
total number of copies is the sum of a geometric series: 1 + 2 + 4 + 8 + ….. + n/2, which is less
than 2n.
Thus, the total cost of n insertions, including all resizing operations, is O(n).
- Total cost: O(n)
- Number of operations: n
- Amortized cost: O(n)/n = O(1)
Therefore, the amortized cost of each insertion operation in a dynamic array is O(1).
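As a hedged sanity check (a simulation sketch, not part of the lecture), we can count the actual work of n doubling-array insertions and confirm the aggregate bound: insertions cost n, copies cost less than 2n, so the total stays below 3n, i.e. O(1) amortized per insertion. The function name is an assumption of this sketch.

```python
# Sketch: count the actual work of n append operations on a doubling
# array (starting capacity 1) and confirm the aggregate bound
# total cost < 3n, hence O(1) amortized per insertion.

def total_insert_cost(n: int) -> int:
    capacity, size, cost = 1, 0, 0
    for _ in range(n):
        if size == capacity:
            cost += size      # copy all current elements on a resize
            capacity *= 2
        cost += 1             # the insertion itself
        size += 1
    return cost

for n in (10, 100, 1000):
    c = total_insert_cost(n)
    print(n, c, c < 3 * n)   # the bound holds for every n tried
```

For example, 10 insertions trigger resizes that copy 1 + 2 + 4 + 8 = 15 elements, for a total cost of 25, which is below 3 · 10 = 30.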
The aggregate method provides a straightforward way to demonstrate that the average cost of
operations over time remains low, even if some individual operations are expensive. This method is
especially useful for analyzing data structures and algorithms where occasional costly operations
are balanced out by many cheaper ones.
Aggregate Method for Augmented Stack with Push, Pop, and Multipop
In this analysis, we'll consider an augmented stack that supports the following operations:
1. Push(x): Push element `x` onto the top of the stack.
2. Pop(): Remove and return the top element of the stack.
3. Multipop(k): Remove the top `k` elements from the stack, or all elements if there are fewer than
`k`.
We'll use the aggregate method to determine the amortized cost of these operations.
Definitions:
- Let P be the number of `Push` operations and Q the number of `Pop` operations in the sequence.
Cost of Operations:
- Push(x): Takes O(1) time.
- Pop(): Takes O(1) time.
- Multipop(k): Takes O(min(k, n)) time, where n is the current number of elements in the stack.
Aggregate Analysis:
To analyze the amortized cost, consider a sequence of n operations consisting of `Push`, `Pop`,
and `Multipop`.
Key Points:
- Each `Multipop(k)` operation removes at most k elements from the stack, but no more than the
number of elements present.
The total number of elements removed by `Pop` and `Multipop` operations is at most Q + ΣMi, where
Q is the number of `Pop` operations and Mi is the number of elements removed by the i-th `Multipop`
operation. Since every removed element must first have been pushed, the number of pushes P satisfies:
P ≥ Q + ΣMi
1. Push(x):
- Cost: O(1)
2. Pop():
- Cost: O(1)
3. Multipop(k):
- Cost: O(min(k, n)) per call.
- However, each element in the stack can be removed only once, either by a `Pop` or a `Multipop`
operation.
Total cost for n operations:
- Each of the P `Push` operations costs O(1), so the total cost due to `Push` operations is O(P).
- Since each element can be removed at most once, the total number of removals performed by all
`Pop` and `Multipop` operations is at most P. Therefore, the total cost of all `Pop` and `Multipop`
operations is also O(P).
Combining these, the total cost is O(P). Since P ≤ n, the total cost is O(n).
Using the aggregate method, we have shown that the amortized cost per operation (including
`Push`, `Pop`, and `Multipop`) in an augmented stack is O(1). This means that, on average, each
operation takes constant time, even though some individual operations (like `Multipop`) may take
longer.
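The augmented stack above can be sketched in code, with a running counter of the actual work so the aggregate bound can be observed directly. The class name and the cost-counting field are assumptions of this sketch, not part of the lecture.

```python
# Sketch of the augmented stack with Push, Pop, and Multipop, plus a
# counter of actual work to observe the aggregate O(n) total cost.

class MultipopStack:
    def __init__(self):
        self.items = []
        self.cost = 0          # actual work performed so far

    def push(self, x):
        self.items.append(x)
        self.cost += 1         # O(1)

    def pop(self):
        self.cost += 1         # O(1)
        return self.items.pop() if self.items else None

    def multipop(self, k):
        removed = []
        # Removes min(k, current size) elements: cost O(min(k, n)).
        while self.items and len(removed) < k:
            removed.append(self.items.pop())
            self.cost += 1
        return removed

s = MultipopStack()
for i in range(10):
    s.push(i)                  # 10 operations, total cost 10
s.multipop(4)                  # removes 4 elements, cost 4
s.pop()                        # cost 1
s.multipop(100)                # removes the remaining 5, cost 5
print(s.cost)                  # 20: total work for 13 operations
```

Even though the final `multipop(100)` asks for far more elements than remain, each element is pushed once and removed at most once, so the total work stays linear in the number of operations.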