
Lecture 2: Analysis and Design of Algorithms

• Amortized Analysis
• Applications and Examples
• Type of Methods of Amortized Analysis
• Aggregate Method

Amortized analysis is a technique used in computer science to analyze the average time complexity
of an algorithm over a sequence of operations, rather than just the worst-case time complexity of a
single operation. This approach provides a more accurate understanding of an algorithm's
performance in practice, especially when the worst-case scenario is rare or when an expensive
operation is offset by a series of cheaper operations.

Key Concepts in Amortized Analysis

1. Aggregate Method: Calculates the total cost of n operations and then divides by n to find the
average cost per operation.

2. Accounting Method: Assigns a different "amortized" cost to each operation, ensuring that the total
amortized cost is at least as much as the total actual cost. This often involves overcharging some
operations to account for more expensive ones.

3. Potential Method: Uses a potential function to represent the "stored energy" or "potential" within
the data structure. The change in potential helps to balance out the costs of expensive operations
over a sequence of operations.
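For instance, one common potential function for the doubling dynamic array discussed below is Φ = 2·(number of stored elements) − (current capacity); with this choice, the amortized cost of an insertion (its actual cost plus the change in potential) works out to a small constant, about 3, even for the insertions that trigger a resize.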

Example: Dynamic Array

Consider a dynamic array that doubles in size when it runs out of space. The amortized analysis helps
to show that the average time complexity of inserting an element is O(1), even though resizing the
array takes O(n) time.

1. Without Amortized Analysis:

- Inserting an element typically takes O(1).

- When resizing, copying all elements takes O(n).

2. With Amortized Analysis:

- Over a sequence of insertions, most operations are O(1), and only a few are O(n).

- Using the aggregate method, the total cost of n insertions, including resizing, is O(n), leading to
an average of O(1) per insertion.
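As a rough illustration (a sketch, not code from the lecture), a doubling dynamic array can be written in a few lines of Python; the class name `DynamicArray` and the method name `insert` are chosen here purely for illustration:

```python
class DynamicArray:
    """Minimal doubling array used to illustrate amortized O(1) insertion."""

    def __init__(self):
        self.capacity = 1                    # current allocated capacity
        self.size = 0                        # number of elements stored
        self.data = [None] * self.capacity

    def insert(self, x):
        # If the array is full, double the capacity and copy every element (O(n) work).
        if self.size == self.capacity:
            self.capacity *= 2
            new_data = [None] * self.capacity
            for i in range(self.size):
                new_data[i] = self.data[i]
            self.data = new_data
        # The insertion itself is O(1).
        self.data[self.size] = x
        self.size += 1
```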

Applications of Amortized Analysis

Amortized analysis is widely used in analyzing data structures and algorithms, such as:

- Dynamic Arrays: As explained above, dynamic arrays use amortized analysis to provide efficient
average-case performance for insertions.
- Splay Trees: A type of self-adjusting binary search tree whose operations run in O(log n) amortized
time.

- Union-Find Data Structures: Used in disjoint-set operations where the union and find operations
have near-constant amortized time.

Amortized analysis provides a deeper understanding of an algorithm's efficiency by considering the
average performance over time, rather than focusing solely on worst-case scenarios.

Aggregate Method of Amortized Analysis

The aggregate method is one of the simplest techniques in amortized analysis. It involves calculating
the total cost of a sequence of operations and then dividing by the number of operations to determine
the average (amortized) cost per operation. This method is particularly useful when the cost of
individual operations can vary significantly, but the total cost over many operations is predictable
and manageable.

Steps in the Aggregate Method:

1. Total Cost Calculation: Compute the total cost of performing n operations.

2. Amortized Cost Calculation: Divide the total cost by the number of operations n.

This average cost is the amortized cost per operation.

Example: Dynamic Array

Consider the dynamic array, also known as an array list or vector, which resizes by doubling its
capacity when it runs out of space. Here's how the aggregate method can be applied to this scenario:

1. Insertions Without Resizing: Each insertion in an array without resizing takes O(1) time.

2. Resizing: When the array is full, resizing it takes O(n) time because all n elements need to be copied
to the new array.

To find the amortized cost, let's analyze a series of insertions:

- Suppose we start with an empty array of initial capacity 1.

- Insertion 1: Insert the first element. (Cost: 1)

- Insertion 2: Insert the second element, causing a resize (from capacity 1 to 2) that copies the first
element. (Cost: 1 for insertion + 1 for copying = 2)

- Insertion 3: Insert the third element, causing another resize (from capacity 2 to 4) that copies 2
elements. (Cost: 1 for insertion + 2 for copying = 3)

- Insertion 4: Insert the fourth element. (Cost: 1)

- Insertion 5: Insert the fifth element, causing a resize (from capacity 4 to 8) that copies 4 elements.
(Cost: 1 for insertion + 4 for copying = 5)
- And so on...

The total cost for n insertions can be summarized as follows:

- Each element is moved during the resize operations. The total number of moves is the sum of a
geometric series: 1 + 2 + 4 + 8 + …, in which the largest term is less than n, so the sum is less than 2n.

Thus, the total cost of n insertions, including all resizing operations, is O(n).

Amortized Cost Calculation:

- Total cost of n operations: O(n)

- Number of operations: n

- Amortized cost per operation: O(n) / n = O(1)

Therefore, the amortized cost of each insertion operation in a dynamic array is O(1).
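To make this calculation concrete, the cost accounting can be simulated directly. The sketch below (an illustration, not part of the lecture) charges 1 unit per insertion plus 1 unit per element copied during a resize, and shows that the cost per insertion stays below 3:

```python
def total_insert_cost(n):
    """Charge 1 per insertion plus 1 per element copied during a resize."""
    capacity, size, cost = 1, 0, 0
    for _ in range(n):
        if size == capacity:      # resize: copy all current elements
            cost += size
            capacity *= 2
        cost += 1                 # the insertion itself
        size += 1
    return cost

for n in (10, 100, 1000, 100_000):
    c = total_insert_cost(n)
    print(n, c, c / n)            # the ratio stays below 3, i.e. amortized O(1)
```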

The aggregate method provides a straightforward way to demonstrate that the average cost of
operations over time remains low, even if some individual operations are expensive. This method is
especially useful for analyzing data structures and algorithms where occasional costly operations
are balanced out by many cheaper ones.

Aggregate Method for Augmented Stack with Push, Pop, and Multipop

In this analysis, we'll consider an augmented stack that supports the following operations:

1. Push(x): Push element `x` onto the stack.

2. Pop(): Remove the top element from the stack.

3. Multipop(k): Remove the top `k` elements from the stack, or all elements if there are fewer than
`k`.

We'll use the aggregate method to determine the amortized cost of these operations.


Cost of Operations:

- Push(x): Takes O(1) time.


- Pop(): Takes O(1) time.

- Multipop(k): Takes O(min(k, n)) time, where n is the current number of elements in the stack.
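A minimal Python sketch of such an augmented stack is given below; the class name `AugmentedStack` and the lowercase method names are illustrative choices, not prescribed by the lecture:

```python
class AugmentedStack:
    """Stack supporting push, pop, and multipop, for the aggregate analysis below."""

    def __init__(self):
        self.items = []

    def push(self, x):
        self.items.append(x)          # O(1)

    def pop(self):
        if self.items:                # remove and return the top element, O(1)
            return self.items.pop()
        return None

    def multipop(self, k):
        # Remove the top k elements, or all elements if fewer than k remain:
        # O(min(k, n)), where n is the current stack size.
        removed = []
        while k > 0 and self.items:
            removed.append(self.items.pop())
            k -= 1
        return removed
```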

Aggregate Analysis:

To analyze the amortized cost, consider a sequence of n operations consisting of `Push`, `Pop`,
and `Multipop`.

1. Total Number of Operations: Let n be the total number of operations.

2. Number of Pushes: Let P be the number of `Push` operations.

3. Number of Pops: Let Q be the number of `Pop` operations.

4. Number of Multipops: Let M be the number of `Multipop` operations.

Key Points:

- Each `Push` operation adds one element to the stack.

- Each `Pop` operation removes one element from the stack.

- Each `Multipop(k)` operation removes at most k elements from the stack, but no more than the
number of elements present.

The total number of elements added to the stack by `Push` operations is P.

The total number of elements removed by `Pop` and `Multipop` operations is Q + ΣMi, where Mi is
the number of elements actually removed by the i-th `Multipop` operation.

Since an element can only be removed after it has been pushed (the stack never holds a negative
number of elements), we have:

P ≥ Q + ΣMi

Amortized Cost Calculation:

1. Push(x):

- Cost: O(1)

2. Pop():

- Cost: O(1)

3. Multipop(k):

- Worst-case cost: O(k)

- However, each element in the stack can be removed only once, either by a `Pop` or a `Multipop`
operation.
Total cost for n operations:

- Each element pushed can only be popped or multipopped once.

- Therefore, the total cost of all `Pop` and `Multipop` operations is O(P).

Thus, the total cost of n operations is:

Total cost = O(P) + O(P) = O(P)

Each `Push` operation costs O(1) and there are P pushes, so the total cost due to `Push` operations
is O(P). Because every element removed by a `Pop` or a `Multipop` must first have been pushed, and
each element is removed at most once, the total cost due to `Pop` and `Multipop` operations is also
O(P).

Combining these, the total cost is O(P). Since P ≤ n, the total cost is O(n).

Amortized Cost Per Operation:

Amortized cost = Total cost / n = O(n) / n = O(1)

Using the aggregate method, we have shown that the amortized cost per operation (including
`Push`, `Pop`, and `Multipop`) in an augmented stack is O(1). This means that, on average, each
operation takes constant time, even though some individual operations (like `Multipop`) may take
longer.
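As a sanity check (an illustrative sketch, not part of the lecture), one can run a random mix of the three operations, charge 1 unit per push and 1 unit per element actually removed, and observe that the total cost divided by the number of operations stays bounded by a small constant:

```python
import random

def total_stack_cost(n, seed=0):
    """Charge 1 per push and 1 per element actually removed by pop/multipop."""
    rng = random.Random(seed)
    stack, cost = [], 0
    for _ in range(n):
        op = rng.choice(("push", "pop", "multipop"))
        if op == "push":
            stack.append(0)
            cost += 1
        elif op == "pop":
            if stack:                          # popping an empty stack does no work
                stack.pop()
                cost += 1
        else:
            k = rng.randint(1, 5)
            removed = min(k, len(stack))       # remove at most k elements
            del stack[len(stack) - removed:]
            cost += removed
    return cost

for n in (100, 10_000, 100_000):
    print(n, total_stack_cost(n) / n)          # ratio stays bounded (at most 2 here)
```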
