Amortized Analysis
Key Idea:
Even though some operations in a data structure (like an array or stack) may be expensive, they don't happen
very often. Over a sequence of operations, the average cost per operation is much lower. Amortized analysis
calculates this average, called the amortized cost.
1️⃣ Aggregate Method
The aggregate method calculates the total cost of n operations and then divides it by n to find the amortized cost.
Example:
If n operations take a total of T(n) time, then the amortized cost per operation is:
Amortized Cost = T(n) / n
2️⃣ Accounting Method
In the accounting method, each operation is assigned an amortized cost (which may be more or less than the actual cost). If an operation's amortized cost is more than its actual cost, the excess is stored as a credit to pay for future expensive operations.
Example:
For a stack, a push operation might cost 1 unit, and a costly resize might cost 4 units. Assign an amortized
cost of 2 to each push, storing 1 unit as credit to pay for future resizes.
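The credit bookkeeping above can be checked with a short simulation. This sketch uses the numbers from the text (push costs 1, an occasional resize costs 4, amortized charge of 2), and assumes a hypothetical schedule where a resize happens after every 4th push, chosen so the stored credits exactly cover each resize:

```python
# Credit-balance sketch for the accounting method.
# Assumption: a resize occurs after every 4th push (illustrative schedule).
def run_pushes(n_pushes, charge=2):
    balance = 0
    for i in range(1, n_pushes + 1):
        actual = 1                    # cost of the push itself
        if i % 4 == 0:
            actual += 4               # periodic resize cost
        balance += charge - actual    # bank the surplus / spend credit
        assert balance >= 0, "credits ran out"
    return balance

print(run_pushes(20))  # prints 0: every cycle of 4 pushes nets to zero
```

The assertion inside the loop is the whole point of the accounting method: the bank balance must never go negative, which proves the amortized charge is high enough.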
3️⃣ Potential Method
The potential method uses a potential function Φ to track the "stored energy" (credits) in the data structure. The amortized cost of an operation is its actual cost plus the change in potential:
Amortized Cost = Actual Cost + Φ(after) − Φ(before)
If the potential increases, the amortized cost exceeds the actual cost (we are storing credit); if it decreases, we are spending stored credits.
Let’s use the Aggregate Method to show how inserting n elements in a dynamic array has an amortized
cost of O(1):
Each time the array becomes full, its capacity doubles and all existing elements are copied to a new array.
Total number of copies over n insertions = 1 + 2 + 4 + … < 2n.
Total cost = O(n) (copying plus inserting).
Amortized cost per insert = O(n)/n = O(1).
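The aggregate bound is easy to verify empirically. This sketch counts the actual cost of n appends into a doubling array (1 per insert, plus one copy per existing element at each resize) and shows the per-operation average stays bounded by a small constant:

```python
# Count the actual total cost of n inserts into a doubling dynamic array.
def total_insert_cost(n):
    capacity, size, cost = 1, 0, 0
    for _ in range(n):
        if size == capacity:      # array full: double capacity
            cost += size          # copy all `size` existing elements
            capacity *= 2
        cost += 1                 # the insertion itself
        size += 1
    return cost

for n in (10, 1000, 10**6):
    print(n, total_insert_cost(n) / n)  # average cost stays below 3
```

The ratio never exceeds 3 regardless of n, which is exactly the O(1) amortized cost the aggregate method predicts.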
Summary Table

| Method     | Key Idea                                                  |
|------------|-----------------------------------------------------------|
| Aggregate  | Total cost of n operations divided by n                   |
| Accounting | Overcharge cheap operations; bank credits for expensive ones |
| Potential  | A potential function Φ tracks the stored credit           |
Probabilistic analysis is a technique used in algorithm analysis that calculates the expected running time
or cost of an algorithm under some probability distribution of the inputs.
Instead of analyzing the worst-case or average-case based on all possible inputs, probabilistic analysis uses
actual probabilities of inputs or events to determine the expected (average) performance of an algorithm.
Problem Statement (Hiring Problem):
You interview n candidates in random order and hire a candidate whenever they are better than everyone seen so far.
Goal: Estimate the expected number of times you will hire a new candidate. (The answer is the nth harmonic number, Hₙ = 1 + 1/2 + … + 1/n = O(log n).)
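A quick Monte Carlo simulation makes the expected value concrete. This sketch assumes the standard setup: candidates arrive as a random permutation of distinct ranks, and we hire whenever the newcomer beats the best so far. The empirical average should land near the harmonic number Hₙ:

```python
import random

# Simulate one run of the hiring problem on n candidates.
def hires(n):
    best = float("-inf")
    count = 0
    for score in random.sample(range(n), n):  # random permutation of ranks
        if score > best:
            best = score
            count += 1                        # a new best: we hire
    return count

n, trials = 100, 20000
avg = sum(hires(n) for _ in range(trials)) / trials
harmonic = sum(1 / k for k in range(1, n + 1))
print(round(avg, 2), round(harmonic, 2))  # both close to ~5.19 for n = 100
```

For n = 100, H₁₀₀ ≈ 5.19, so on average you hire only about 5 times out of 100 interviews.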
Describe the aggregate method applied to a dynamic array for amortized analysis.
Conclusion
✅ Assign a fictitious (amortized) cost to each operation (more than its actual cost for some operations).
✅ Store the extra cost as a “credit” or “bank balance” to pay for expensive operations later (like resizing).
This ensures the amortized cost per operation stays low and predictable.
Let’s say we assign an amortized cost of 3 to each insertion (more than the actual cost of 1). Let’s see why
this works.
Actual cost = 1
We charge 3
So, we have 2 credits left per normal insertion.
When we double the array, we copy all existing elements (which is expensive).
Let's calculate: each element's 2 saved credits pay 1 for copying itself at the next resize and 1 for copying one element that was already in the array, so the stored credits always cover the full cost of doubling.
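This credit argument can be checked numerically. A minimal sketch: charge 3 per insertion, subtract the actual cost (1 per insert, plus one copy per existing element at each doubling), and assert that the bank balance never goes negative:

```python
# Verify the charge of 3 by tracking the credit balance while inserting
# n elements into a doubling array.
def credit_balance(n, charge=3):
    capacity, size, balance = 1, 0, 0
    for _ in range(n):
        actual = 1                # cost of the insert itself
        if size == capacity:
            actual += size        # copy every existing element
            capacity *= 2
        balance += charge - actual
        assert balance >= 0, "charge of 3 is too low"
        size += 1
    return balance

print(credit_balance(10**5) >= 0)  # True: the credits never run out
```

A charge of 2 would fail this assertion at the first large resize, which is why 3 is the right amortized cost here.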
Shortest-Path Properties and Their Proofs
When finding the shortest paths in a graph (like with Dijkstra's or Bellman-Ford), several fundamental properties are used in the correctness proofs, including:
✅ Optimal substructure: any subpath of a shortest path is itself a shortest path.
✅ Triangle inequality: δ(s, v) ≤ δ(s, u) + w(u, v) for every edge (u, v).
✅ Upper-bound property: the distance estimate d[v] never drops below δ(s, v), and once d[v] = δ(s, v) it never changes.
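One of these properties, the triangle inequality, can be sanity-checked with a short Dijkstra run. The graph and its edge weights below are hypothetical, chosen only for illustration:

```python
import heapq

# Standard Dijkstra over an adjacency-list graph with nonnegative weights.
def dijkstra(graph, s):
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd               # relax edge (u, v)
                heapq.heappush(pq, (nd, v))
    return dist

graph = {"s": [("a", 2), ("b", 5)], "a": [("b", 1), ("c", 4)], "b": [("c", 1)]}
dist = dijkstra(graph, "s")
for u, edges in graph.items():
    for v, w in edges:
        assert dist[v] <= dist[u] + w      # triangle inequality holds
print(dist)  # {'s': 0, 'a': 2, 'b': 3, 'c': 4}
```

Every edge satisfies δ(s, v) ≤ δ(s, u) + w(u, v): if it did not, relaxing that edge would produce a shorter path, contradicting optimality.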
What is dynamic programming? Write the steps to solve a problem using dynamic programming. Also discuss the elements of dynamic programming and how it affects the time complexity of a problem.
Dynamic programming is a powerful algorithm design technique used to solve complex problems by
breaking them down into simpler overlapping subproblems. It works by storing the results of subproblems to
avoid redundant computations, making it highly efficient.
✅ Optimal Substructure: An optimal solution to the problem can be constructed from optimal solutions of its subproblems.
✅ Overlapping Subproblems: The same subproblems are solved multiple times.
Dynamic programming is often used for optimization problems, like finding the shortest path, maximum
profit, or minimum cost.
1. Identify how the solution to the main problem relates to solutions of smaller subproblems.
2. Write a mathematical relation (recurrence) expressing the solution of the main problem in terms of the smaller subproblems.
3. Solve each subproblem once and store its result, either top-down with a cache (memoization) or bottom-up in a table (tabulation).
4. Use the stored values (table or cache) to get the final answer efficiently.
🔍 Elements of Dynamic Programming
✅ Subproblems
✅ Overlapping Subproblems
✅ Optimal Substructure
✅ Memoization / Tabulation
Without dynamic programming, recursive solutions may recompute subproblems multiple times, leading to
exponential time complexity (e.g., O(2^n) in the Fibonacci series).
🔸 Memoization: top-down recursion that caches each subproblem's result the first time it is computed.
🔸 Tabulation: bottom-up iteration that fills a table starting from the base cases.
For example:
✅ Fibonacci sequence: drops from O(2^n) (naive recursion) to O(n) with either technique.
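Both styles can be sketched side by side for Fibonacci:

```python
from functools import lru_cache

# Memoization (top-down): cache each fib(k) so it is computed only once,
# turning the naive O(2^n) recursion into O(n).
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Tabulation (bottom-up): fill a table from the base cases upward.
def fib_tab(n):
    if n < 2:
        return n
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]

print(fib_memo(50), fib_tab(50))  # 12586269025 12586269025
```

Without the cache or table, fib(50) would take on the order of 2^50 recursive calls; with either technique it takes about 50 steps.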
In computational complexity theory, problems are classified based on how difficult they are to solve. Two
important classes of problems are NP-Complete and NP-Hard.
NP stands for the set of decision problems (yes/no questions) whose solutions can be verified in
polynomial time.
In other words, if you’re given a solution, you can check it quickly (polynomial time).
📌 NP-Complete Problems
A problem is NP-Complete if it is in NP and every problem in NP can be reduced to it in polynomial time. These are the hardest problems in NP.
📌 NP-Hard Problems
A problem is NP-Hard if every problem in NP reduces to it in polynomial time, but it is not required to be in NP itself (it may not even be a decision problem).
📌 Examples
✅ NP-Complete Problems:
SAT (Boolean Satisfiability Problem): Finding an assignment of variables to make a Boolean expression true.
Subset Sum Problem: Given a set of integers, is there a subset that adds up to a specific sum?
Traveling Salesman Problem (Decision Version): Is there a tour with cost less than or equal to a given number?
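The "verifiable in polynomial time" part of NP is easy to demonstrate for Subset Sum: checking a proposed certificate (a claimed subset) takes linear time, even though finding one may require exponential search. The function name and certificate format below are illustrative:

```python
# Verify a Subset Sum certificate in O(n) time.
# certificate: a list of distinct indices into `numbers` claimed to sum to `target`.
def verify_subset_sum(numbers, target, certificate):
    return (len(set(certificate)) == len(certificate)          # no index reused
            and all(0 <= i < len(numbers) for i in certificate)  # indices valid
            and sum(numbers[i] for i in certificate) == target)  # sums correctly

nums = [3, 34, 4, 12, 5, 2]
print(verify_subset_sum(nums, 9, [2, 4]))   # True: 4 + 5 == 9
print(verify_subset_sum(nums, 30, [0, 1]))  # False: 3 + 34 != 30
```

This asymmetry, fast verification versus (apparently) slow search, is exactly what separates NP membership from membership in P.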
✅ NP-Hard Problems:
Halting Problem: Determine whether a given program halts on a given input (NP-Hard but not in NP, since it is undecidable).
Traveling Salesman Problem (Optimization Version): Find the minimum-cost tour itself.
📌 Importance of NP-Completeness
If any single NP-Complete problem can be solved in polynomial time, then P = NP and every problem in NP can be solved in polynomial time. Proving a problem NP-Complete therefore justifies turning to approximation algorithms or heuristics instead of searching for an exact polynomial-time algorithm.
📌 Conclusion
NP-Complete: Hardest in NP; solving any NP-Complete problem in polynomial time solves all NP problems efficiently.
NP-Hard: As hard as NP-Complete or harder; not necessarily in NP.
These concepts are fundamental in studying computational intractability and algorithm design.