
Aspect: Greedy Algorithm/Method vs. Dynamic Programming

Approach
 Greedy: Makes the locally optimal choice at each step with the hope of finding a global optimum.
 Dynamic Programming: Breaks the problem down into smaller, simpler subproblems, solves each subproblem just once, stores the solution, and is usually done bottom-up.

Optimality Guarantee
 Greedy: Does not guarantee a globally optimal solution for all problems; often used for problems where local optimality leads to a global optimum.
 Dynamic Programming: Guarantees an optimal solution by considering all possible cases.

Overlapping Subproblems
 Greedy: Typically does not address overlapping subproblems.
 Dynamic Programming: Explicitly solves and caches solutions to overlapping subproblems.

Subproblem Recomputation
 Greedy: Does not generally recognize subproblems, so there is nothing to reuse or recompute.
 Dynamic Programming: Reuses subproblem solutions, so there is no need to recompute them.

Problem Type
 Greedy: Often used for optimization problems where a series of decisions leads to a solution.
 Dynamic Programming: Used for optimization problems that can be broken down into overlapping subproblems with the optimal substructure property.

Technique
 Greedy: Forward-looking; makes decisions based only on current information, without regard to future consequences.
 Dynamic Programming: Backward-looking; decisions are made by considering future consequences, which optimal substructure makes possible.

Example Problems
 Greedy: Fractional Knapsack, Minimum Spanning Trees (Kruskal's and Prim's algorithms).
 Dynamic Programming: 0/1 Knapsack, Shortest Paths (Floyd-Warshall and Bellman-Ford algorithms), Fibonacci number series.

Caching Decisions
 Greedy: Does not require caching past decisions.
 Dynamic Programming: Uses memoization or tabulation to cache and retrieve results of subproblems.

Usage Scenarios
 Greedy: Used when a problem has the "greedy choice property," allowing a local optimum to be chosen at each step.
 Dynamic Programming: Used when the problem can be divided into stages, with a decision at each stage affecting the outcome of the solution.

Time Complexity
 Greedy: Often more efficient in terms of time complexity.
 Dynamic Programming: Can be less efficient than greedy if every possible subproblem is computed (though proper use of memoization mitigates this).
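To make the contrast concrete, here is a minimal Python sketch (illustrative only; the item values and weights are invented for the example) of the two knapsack variants named in the Example Problems row: a greedy solution for the Fractional Knapsack and a bottom-up dynamic-programming solution for the 0/1 Knapsack.

```python
# Greedy vs. dynamic programming on the two knapsack variants from the table.
# The item data below is made up purely for illustration.

def fractional_knapsack(values, weights, capacity):
    """Greedy: take items in order of value/weight ratio; fractions allowed."""
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity == 0:
            break
        take = min(weight, capacity)        # take as much of the best item as fits
        total += value * (take / weight)
        capacity -= take
    return total

def knapsack_01(values, weights, capacity):
    """DP (tabulation): dp[c] = best value achievable with capacity c, items indivisible."""
    dp = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        for c in range(capacity, weight - 1, -1):   # iterate backwards so each item is used once
            dp[c] = max(dp[c], dp[c - weight] + value)
    return dp[capacity]

values, weights, capacity = [60, 100, 120], [10, 20, 30], 50
print(fractional_knapsack(values, weights, capacity))  # 240.0 (greedy, fractions allowed)
print(knapsack_01(values, weights, capacity))          # 220   (DP, whole items only)
```

On this made-up data the fractional (greedy) answer beats the 0/1 (DP) answer, which is expected: the fractional variant has the greedy-choice property, while in the indivisible variant a locally best ratio can be globally suboptimal, so DP over all subproblems is needed.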
Aspect: Recursion vs. Iteration

Definition
 Recursion: A technique in which a function calls itself in its body to solve the problem, typically breaking it into smaller and more manageable sub-problems.
 Iteration: A technique that repetitively executes a block of code until the condition is no longer met.

Syntax
 Recursion: A termination (base) condition is specified.
 Iteration: The loop includes initialization, a condition, and an increment/decrement of a variable.

Control Structure
 Recursion: Function call stack.
 Iteration: Looping constructs (for, while, etc.).

Problem-Solving Approach
 Recursion: Divide and conquer.
 Iteration: Sequential execution.

Time Complexity
 Recursion: Generally higher running time, due to the overhead of maintaining the function call stack.
 Iteration: Lower than recursion, due to the absence of function-call overhead.

Space Complexity
 Recursion: Higher than iteration.
 Iteration: Generally lower than recursion, due to the absence of function-call overhead.

Memory
 Recursion: Uses more memory.
 Iteration: Uses less memory.

Speed
 Recursion: Slower than iteration.
 Iteration: Faster than recursion, as it uses less memory.

Application
 Recursion: For functions.
 Iteration: For loops.

Usage
 Recursion: Tree/graph traversal, or problems that can be broken down into similar sub-problems.
 Iteration: Preferred when a task requires repeating similar operations over the elements of a structure (for example, a list).
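As a small illustration of the two styles, here is a minimal Python sketch computing the same value (factorial, chosen only as an example) recursively and iteratively.

```python
def factorial_recursive(n):
    """Recursion: the function calls itself; each call adds a frame to the call stack."""
    if n <= 1:              # termination (base) condition
        return 1
    return n * factorial_recursive(n - 1)

def factorial_iterative(n):
    """Iteration: a loop with initialization, condition, and update; constant extra space."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial_recursive(5), factorial_iterative(5))  # 120 120
```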
Amortized analysis is a technique used in computer science to average out the cost of operations over a
sequence of operations, even when some individual operations might be expensive. Instead of focusing on
the worst-case cost of a single operation, amortized analysis provides a more realistic measure of an
algorithm’s performance over the long run.

Key Idea:

Even though some operations in a data structure (like an array or stack) may be expensive, they don't happen
very often. Over a sequence of operations, the average cost per operation is much lower. Amortized analysis
calculates this average, called the amortized cost.

Why Use Amortized Analysis?

 To get a more accurate picture of the average performance.


 To show that, although individual operations may be costly, they happen rarely enough that the
average cost remains low.

Methods of Amortized Analysis:

There are three main methods:

1️⃣ Aggregate Method

The aggregate method calculates the total cost of n operations and then divides it by n to find the amortized
cost.

Example:
If n operations take a total of T(n) time, then the amortized cost per operation is:

Amortized Cost = T(n) / n

2️⃣ Accounting Method (or Banker’s Method)

In the accounting method, each operation is assigned an amortized cost (which may be more or less than
the actual cost). If an operation’s amortized cost is more than its actual cost, the excess is stored as a credit
to pay for future expensive operations.

 Assign a higher amortized cost to cheap operations to save credits.


 Use these credits to pay for expensive operations later.

Example:
For a stack, a push operation might cost 1 unit, and a costly resize might cost 4 units. Assign an amortized
cost of 2 to each push, storing 1 unit as credit to pay for future resizes.
3️⃣ Potential Method

The potential method uses a potential function to track the "stored energy" or credits in the data structure.
The potential represents the difference between the actual cost and amortized cost.

The amortized cost for an operation is:

Amortized Cost = Actual Cost + (Potential After − Potential Before)

If the potential increases, the amortized cost is higher. If it decreases, it means that we’re using up stored
credits.
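As a concrete illustration, here is a minimal Python sketch for a dynamic array with the doubling strategy, using the commonly chosen potential Φ = 2 × (number of elements) − (capacity); both the doubling strategy and this particular Φ are assumptions for the example rather than the only possible choices. The amortized cost of every insertion works out to the constant 3, even when the actual cost spikes at a resize.

```python
# A minimal sketch of the potential method on a dynamic array (doubling strategy).
# Phi = 2*num_elements - capacity is one standard choice of potential function.

def potential(num, cap):
    return 2 * num - cap

num, cap = 0, 1                              # start with an empty array of capacity 1
for i in range(1, 17):                       # insert 16 elements
    phi_before = potential(num, cap)
    if num == cap:                           # array full: double and copy
        actual = num + 1                     # copy `num` elements, then write the new one
        cap *= 2
    else:
        actual = 1                           # normal insert: one write
    num += 1
    phi_after = potential(num, cap)
    amortized = actual + (phi_after - phi_before)
    print(f"insert {i:2d}: actual={actual:2d}, amortized={amortized}")
    # amortized stays at the small constant 3, even when `actual` spikes at a resize
```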

Example: Dynamic Array (Array Doubling)

Let’s use the Aggregate Method to show that inserting n elements into a dynamic array has an amortized cost of O(1) per insertion (a short simulation follows the list below):

 Each time the array doubles, it copies all elements to a new array.
 Number of copies when inserting n elements = less than 2n.
 Total cost = O(n) (copying plus inserting).
 Amortized cost per insert = O(n)/n = O(1).
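A minimal Python simulation of this count (the doubling strategy is assumed, and the cost model simply charges 1 per element written or copied) confirms that the average cost per insertion stays below a small constant:

```python
# Aggregate method, simulated: total cost of n insertions into a doubling array.

def total_insert_cost(n):
    cost, size, capacity = 0, 0, 1
    for _ in range(n):
        if size == capacity:      # full: copy all `size` elements into a doubled array
            cost += size
            capacity *= 2
        cost += 1                 # write the new element
        size += 1
    return cost

for n in (10, 100, 1000, 10**6):
    print(n, total_insert_cost(n) / n)   # average (amortized) cost stays below 3
```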

Summary Table

Method: Key Idea
 Aggregate: Average the total cost across all operations.
 Accounting: Overcharge cheap operations; save the credits for costly operations.
 Potential: Use a potential function to track credits/debits.

What is Probabilistic Analysis?

Probabilistic analysis is a technique used in algorithm analysis that calculates the expected running time
or cost of an algorithm under some probability distribution of the inputs.

Instead of analyzing the worst-case or average-case based on all possible inputs, probabilistic analysis uses
actual probabilities of inputs or events to determine the expected (average) performance of an algorithm.

✅ It considers how likely each input or outcome is.


✅ It calculates the expected value (average cost or performance) using the probabilities.

The Hiring Problem:

The Hiring Problem is a classic example where probabilistic analysis shines!

Problem Statement:

 You’re interviewing n candidates, one at a time, in random order.
 You always want your current hire to be the best candidate seen so far.
 Every time you see a better candidate, you fire the current hire and hire the new one (with a cost each time).

Goal: Estimate the expected number of times you will hire a new candidate.
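Sketch of the standard analysis: let X_i = 1 if candidate i is hired. Candidate i is hired exactly when they are the best of the first i candidates, which in a uniformly random order happens with probability 1/i, so the expected number of hires is 1 + 1/2 + ... + 1/n = H_n ≈ ln n. The short Python simulation below (the values of n and the number of trials are arbitrary choices for illustration) checks this numerically.

```python
import random

def hires_in_random_order(n):
    """Count how many times a strictly better candidate appears as we scan a random order."""
    order = list(range(n))
    random.shuffle(order)
    best, hires = -1, 0
    for score in order:
        if score > best:      # better than everyone seen so far: hire (and fire the old hire)
            best = score
            hires += 1
    return hires

n, trials = 1000, 2000
average = sum(hires_in_random_order(n) for _ in range(trials)) / trials
harmonic = sum(1 / i for i in range(1, n + 1))   # H_n, the predicted expected value
print(round(average, 2), round(harmonic, 2))     # both are about 7.5 for n = 1000
```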
Describe the Aggregate method applied to a dynamic array for amortized analysis.

Conclusion

✔️ Even though some insertions (the ones that trigger a resize) are expensive,
✔️ the amortized cost per insertion is O(1) (a small constant, at most about 3 units in the doubling example above).
✔️ This is why dynamic arrays are efficient on average, despite occasional expensive resizing.
Describe the Accounting method applied to a dynamic array for amortized analysis.

Scenario: Dynamic Array (Doubling Strategy)

We’re dealing with a dynamic array where:

 Inserting an element usually costs 1 unit (normal insert).


 When the array is full, we double its size and copy all existing elements (expensive resize).

💡 Key Idea of the Accounting Method

In the accounting method, we:

✅ Assign a fictitious (amortized) cost to each operation (more than its actual cost for some operations).
✅ Store the extra cost as a “credit” or “bank balance” to pay for expensive operations later (like resizing).

This ensures the amortized cost per operation stays low and predictable.

🧮 How to Apply Accounting Method for Dynamic Arrays

Let’s say we assign an amortized cost of 3 to each insertion (more than the actual cost of 1). Let’s see why
this works.

🔢 Steps for Each Insertion

1⃣ Normal insertion cost:

 Actual cost = 1
 We charge 3
 So, we have 2 credits left per normal insertion.

2️⃣ Use of credits for resizing (doubling):

 When we double the array, we copy all existing elements (which is expensive).

Let’s calculate: of the 3 units charged to each insertion, 1 pays for writing the new element, 1 is banked to pay for copying that same element at the next resize, and 1 is banked to pay for copying one of the elements that was already in the array before the last resize. By the time an array of capacity k fills up, the k/2 insertions made since the previous doubling have banked 2 × (k/2) = k credits, exactly enough to pay for copying all k elements. Every resize is therefore prepaid, and the amortized cost per insertion stays at 3, i.e. O(1).
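A minimal Python sketch of this bookkeeping (the charge of 3 per insertion comes from the section above; the loop length is an arbitrary choice for illustration), showing that the credit balance never goes negative:

```python
# Accounting (banker's) method on a dynamic array: charge 3 per insertion,
# pay real costs out of the bank, and check the balance never goes negative.

CHARGE = 3
bank, size, capacity = 0, 0, 1

for _ in range(64):
    bank += CHARGE            # amortized charge for this insertion
    if size == capacity:      # full: pay to copy every existing element
        bank -= size
        capacity *= 2
    bank -= 1                 # pay for writing the new element itself
    size += 1
    assert bank >= 0          # credits always cover the actual cost

print("final credit balance:", bank)
```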
Shortest-Path Properties and Their Proofs

When finding the shortest paths in a graph (like with Dijkstra’s or Bellman-Ford), several fundamental
properties are often proved. Here are some key properties and their proofs:
What is dynamic programming? Write the steps to solve a problem using dynamic programming. Also discuss the elements of dynamic programming. How does it affect the time complexity of a problem?

🌟 What is Dynamic Programming?

Dynamic programming is a powerful algorithm design technique used to solve complex problems by
breaking them down into simpler overlapping subproblems. It works by storing the results of subproblems to
avoid redundant computations, making it highly efficient.

The core idea of dynamic programming is:

✅ Optimal Substructure: The problem can be broken down into smaller subproblems which can be solved
independently.
✅ Overlapping Subproblems: The same subproblems are solved multiple times.

Dynamic programming is often used for optimization problems, like finding the shortest path, maximum
profit, or minimum cost.

📋 Steps to Solve a Problem Using Dynamic Programming

1⃣ Characterize the Structure of the Optimal Solution

 Identify how the solution to the main problem relates to solutions of smaller subproblems.

2️⃣ Define the State

 Choose a set of parameters (variables) that represent the subproblem.

3️⃣ Formulate the Recurrence Relation

 Write a mathematical relation expressing the solution of the main problem in terms of the smaller
subproblems.

4️⃣ Determine the Base Cases

 Identify and handle trivial cases (smallest possible subproblems).

5️⃣ Implement a Memoization or Tabulation Strategy

 Memoization: top-down approach using recursion with caching (illustrated in the Fibonacci sketch after these steps).
 Tabulation: bottom-up approach filling a table iteratively.

6️⃣ Compute the Final Solution

 Use the stored values (either table or cache) to get the final answer efficiently.
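To make steps 5 and 6 concrete, here is a minimal Python sketch of both strategies on the Fibonacci numbers (the same example used later in this document); it is an illustration, not the only way to structure the code.

```python
from functools import lru_cache

# Memoization: top-down recursion with caching of subproblem results.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:                       # base cases
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Tabulation: bottom-up, filling a table from the base cases upward.
def fib_tab(n):
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_memo(50), fib_tab(50))    # both run in O(n) instead of O(2^n)
```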
🔍 Elements of Dynamic Programming

The key elements are:

✅ Subproblems

 The smaller versions of the problem that can be solved independently.

✅ Overlapping Subproblems

 Subproblems are solved repeatedly.

✅ Optimal Substructure

 An optimal solution to the overall problem contains optimal solutions to subproblems.

✅ Memoization / Tabulation

 Memoization stores solutions to subproblems to avoid recomputation (top-down).


 Tabulation fills a table iteratively (bottom-up).

🧮 Effect on Time Complexity

Without dynamic programming, recursive solutions may recompute subproblems multiple times, leading to
exponential time complexity (e.g., O(2^n) in the Fibonacci series).

With dynamic programming:

🔸 Memoization:

 Reduces time complexity by storing subproblem solutions.


 Time complexity = number of unique subproblems × time per subproblem.

🔸 Tabulation:

 Precomputes all subproblems iteratively.


 Similar improvement in time complexity.

For example:

✅ Fibonacci sequence

 Naive recursion: O(2^n)


 Dynamic Programming: O(n)

✅ Longest Common Subsequence

 Naive recursion: exponential.
 Dynamic Programming: O(m × n), where m and n are the lengths of the input strings (see the sketch below).
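A minimal tabulation sketch for the LCS length (the two strings are the usual textbook example, chosen only for illustration):

```python
# Bottom-up DP for Longest Common Subsequence length: O(m * n) time and space.
def lcs_length(a, b):
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]    # dp[i][j] = LCS of a[:i] and b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))   # 4 ("BCBA" is one longest common subsequence)
```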
(b) Write a short note on NP-Completeness and NP-Hard problems.

🌟 NP-Completeness and NP-Hard Problems

In computational complexity theory, problems are classified based on how difficult they are to solve. Two
important classes of problems are NP-Complete and NP-Hard.

📌 NP (Nondeterministic Polynomial time)

 NP stands for the set of decision problems (yes/no questions) whose solutions can be verified in polynomial time.
 In other words, if you’re given a proposed solution (a certificate), you can check it quickly, in polynomial time; the Subset Sum verifier sketched below illustrates this.
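A minimal Python sketch of such a verifier for the Subset Sum problem (one of the NP-Complete examples listed further down); the numbers and certificates here are made up for illustration. Checking a proposed subset takes one linear pass, even though finding one may take exponential time in the worst case.

```python
# Polynomial-time verifier for Subset Sum: checking a certificate is easy,
# even though searching for one may be hard.

def verify_subset_sum(numbers, target, certificate):
    """certificate: indices of the chosen elements (the proposed 'yes' witness)."""
    if len(set(certificate)) != len(certificate):   # indices must be distinct
        return False
    return sum(numbers[i] for i in certificate) == target   # O(n) check

numbers = [3, 34, 4, 12, 5, 2]
print(verify_subset_sum(numbers, 9, [2, 4]))   # True: 4 + 5 = 9
print(verify_subset_sum(numbers, 9, [0, 1]))   # False: 3 + 34 != 9
```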

📌 NP-Complete Problems

 A problem is NP-Complete if:


1. It is in NP (solutions can be verified in polynomial time).
2. Every problem in NP can be polynomial-time reduced to it.
 These are the hardest problems in NP.
 If you find a polynomial-time algorithm for any NP-Complete problem, it means all problems in
NP can be solved in polynomial time (P=NP).

📌 NP-Hard Problems

 A problem is NP-Hard if it is at least as hard as the hardest problems in NP.


 Unlike NP-Complete, NP-Hard problems do not have to be decision problems.
 They might not have a polynomial-time verification process.
 Example: The Halting Problem is NP-Hard but not in NP.

📌 Examples

✅ NP-Complete Problems:

 SAT (Boolean Satisfiability Problem): Finding an assignment of variables to make a Boolean expression true.
 Subset Sum Problem: Given a set of integers, is there a subset that adds up to a specific sum?
 Traveling Salesman Problem (Decision Version): Is there a tour with cost less than or equal to a given number?

✅ NP-Hard Problems:

 Halting Problem: Determining whether a program halts or runs forever.


 Traveling Salesman Problem (Optimization Version): Finding the shortest tour (not just verifying a given tour length).

📌 Importance of NP-Completeness

 NP-Complete problems help us understand the boundaries of efficient computation.


 So far, no polynomial-time algorithms have been found for any NP-Complete problem, and it’s still unknown whether
P=NP.
 Researchers focus on approximation algorithms or heuristics for practical solutions.

📌 Conclusion

 NP-Complete: Hardest in NP; solving any NP-Complete problem in polynomial time solves all NP problems efficiently.
 NP-Hard: As hard as NP-Complete or harder; not necessarily in NP.
 These concepts are fundamental in studying computational intractability and algorithm design.
