DAA Exit Exam Tutorial 2017
PREPARED BY YITAYISH LEMA.
EMAIL: [email protected]
YouTube: https://www.youtube.com/@yitulema
COURSE OBJECTIVE
ALGORITHM AND ALGORITHM ANALYSIS
PROPERTIES OF ALGORITHM
Effectiveness: Every step in the algorithm must be effective, i.e., every step
should do some work. The operations must be feasible and clearly defined.
Correctness: An algorithm should correctly solve the problem it is designed to
address.
ANALYSIS OF ALGORITHM
2. for (i = 1; i < n; i = i + 2) { statement; }
T(n) = O(n), S(p) = O(1)
5. for (i = 1; i < n; i = i * 2) { statement; }
T(n) = O(log n), S(p) = O(1)
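To sanity-check these bounds, here is a small runnable C sketch (ours, not from the slides) that counts how many times each loop body executes:

#include <stdio.h>

int main(void) {
    /* Count loop-body executions for several input sizes n. */
    for (int n = 16; n <= 1024; n *= 4) {
        int add_steps = 0, mul_steps = 0;
        for (int i = 1; i < n; i = i + 2) add_steps++;  /* about n/2 -> O(n)        */
        for (int i = 1; i < n; i = i * 2) mul_steps++;  /* about log2(n) -> O(log n) */
        printf("n=%4d  i+=2: %3d steps  i*=2: %2d steps\n", n, add_steps, mul_steps);
    }
    return 0;
}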
ANALYSIS OF RECURSIVE ALGORITHM
A recurrence relation is an equation that defines a sequence of values recursively, where each term in
the sequence is defined in terms of one or more previous terms.
Methods for Solving Recurrences:
Substitution Method: Make educated guesses for solutions and substitute back into the original
equation.
Types:
Forward Substitution: Generates terms from the initial condition.
Backward Substitution: Derives formulas by substituting previous terms recursively.
Iteration Method: Expand the recurrence by substituting previous terms until a pattern emerges.
CONT’D
Recursion Tree Method: Visualize the recursive calls as a tree to analyze the total work done at each
level.
Master Theorem: For recurrences of the form T(n) = aT(n/b) + f(n), it provides a
framework to determine asymptotic behavior based on the coefficients and exponents.
Generating Functions: Use algebraic manipulation of generating functions to find closed-form
expressions.
Matrix Method: Applicable to linear homogeneous recurrences, using matrices to find solutions.
Characteristic Equation Method: Derive the characteristic equation from the recurrence and find its
roots for solutions.
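As a quick worked illustration (ours, using the methods above), backward substitution on T(n) = T(n-1) + n gives:
T(n) = T(n-1) + n = T(n-2) + (n-1) + n = ... = T(0) + (1 + 2 + ... + n) = Θ(n^2)
And the Master Theorem with f(n) = Θ(n^k) compares a against b^k:
a > b^k  =>  T(n) = Θ(n^(log_b a))
a = b^k  =>  T(n) = Θ(n^k log n)
a < b^k  =>  T(n) = Θ(n^k)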
ANALYSIS OF RECURSIVE ALGORITHM
Recursive algorithms solve problems by breaking them down into smaller sub-problems of the same type. They call
themselves with modified arguments until reaching a base case, where the solution is straightforward.
Key Concepts:
Base Case: The simplest instance of the problem that can be solved directly without further recursion. Essential for
preventing infinite loops.
Recursive Case: The part of the algorithm where the function calls itself with modified parameters to approach the
base case.
Performance Analysis:
Time Complexity: Often analyzed using recurrence relations, which express the running time in terms of the
running time of smaller instances.
Common forms:
T(n) = aT(n - b) + f(n): decreasing function, where a > 0, b > 0, and f(n) = Θ(n^k)
T(n) = aT(n/b) + f(n): dividing function, where a >= 1, b > 1, and f(n) = Θ(n^k log^p n)
Space Complexity: Recursive calls typically use stack space, leading to a space complexity of O(n) in the worst case,
where n is the depth of recursion.
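A minimal C example of these concepts (ours, not from the slides): factorial, whose base case stops the recursion and whose recurrence T(n) = T(n-1) + 1 gives O(n) time, with O(n) stack space from the recursion depth.

#include <stdio.h>

unsigned long long factorial(unsigned int n) {
    if (n <= 1)                   /* base case: solved directly, no recursion      */
        return 1;
    return n * factorial(n - 1);  /* recursive case: smaller instance of the problem */
}

int main(void) {
    printf("10! = %llu\n", factorial(10));
    return 0;
}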
Examples of Recursive Analysis
Decreasing Function
1. T(n)=T(n-1)+1
2. T(n)=T(n-1)+n
3. T(n)=T(n-1)+n^2
4. T(n)=2T(n-1)+1
Dividing Function
1. T(n)=T(n/2)+1
2. T(n)=2T(n/2)+n
3. T(n)=4T(n/2)+n
4. T(n)=8T(n/2)+n
5. T(n)=2T(n/2)+n/log(n)
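Worked answers for two of these, as a check (standard results; the derivations are ours):
Decreasing 4: T(n) = 2T(n-1) + 1 = 4T(n-2) + 2 + 1 = ... = 2^n T(0) + (2^n - 1) = Θ(2^n)
Dividing 3: T(n) = 4T(n/2) + n, where n^(log_2 4) = n^2 dominates f(n) = n, so T(n) = Θ(n^2) by the Master Theorem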
Classes of time functions
In asymptotic analysis, different classes or categories of time functions are used to describe the
growth rate or complexity of algorithms. The most commonly encountered classes of time
functions include (video: https://youtu.be/Bpfo0daYAz8):
Constant Time (O(1)): Algorithms with constant time complexity have a fixed number of
operations that do not depend on the input size.
Logarithmic Time (O(log n)): Algorithms with logarithmic time complexity have runtimes that
grow logarithmically with the input size. As the input size increases, the runtime increases, but at a
decreasing rate. Efficient algorithms that divide the problem space in half at each step, such as
binary search on a sorted array or certain tree-based operations, often exhibit logarithmic
complexity.
Cont’d
Linear Time (O(n)): Algorithms with linear time complexity have runtimes that grow linearly
with the input size.
Linearithmic Time (O(n log n)): describes algorithms that have runtimes that grow
proportionally to the size of the input multiplied by the logarithm of that size. These algorithms
typically involve a combination of linear and logarithmic operations and are commonly seen in
efficient sorting algorithms like merge sort and quicksort.
Polynomial Time (O(n^k)): Algorithms with polynomial time complexity have runtimes that
grow as a power of the input size.
Exponential Time (O(2^n)): Algorithms with exponential time complexity have runtimes that
grow exponentially with the input size. These algorithms are generally considered inefficient and
may become infeasible for even moderately sized inputs.
Cont’d
The above classes represent some of the most commonly encountered time complexities. It's
important to note that there are other classes as well, such as sub-linear (O(sqrt(n))), super-linear
(O(n log log n)), and factorial (O(n!)), which represent different growth rates and complexities.
The order below reflects the increasing growth rates of these time complexities as the input size
increases. As we move from left to right, the algorithms become less efficient in terms of their
runtime or resource usage.
O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(2^n)
By understanding the relative growth rates, we can make informed decisions about algorithm
selection and optimization for different problem sizes.
CHAPTER TWO: DIVIDE AND CONQUER ALGORITHM
Divide and Conquer is a powerful strategy used to solve complex problems by breaking them down into simpler
sub-problems, solving each sub-problem independently, and combining their solutions to
solve the original problem.
Key Steps in Divide and Conquer:
Divide: Split the problem into smaller sub-problems, ideally of the same type.
Conquer: Recursively solve each sub-problem. If the sub-problems are small enough,
solve them directly.
Combine: Merge the solutions of the sub-problems to form a solution to the original
problem.
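A minimal C sketch of the three steps (our example, assuming a non-empty range): finding the maximum of an array.

#include <stdio.h>

/* Recurrence: T(n) = 2T(n/2) + O(1), which solves to O(n). */
int max_dc(const int a[], int lo, int hi) {
    if (lo == hi)                        /* conquer directly: one element     */
        return a[lo];
    int mid = lo + (hi - lo) / 2;        /* divide: split the range in half   */
    int left  = max_dc(a, lo, mid);      /* conquer left half                 */
    int right = max_dc(a, mid + 1, hi);  /* conquer right half                */
    return left > right ? left : right;  /* combine: larger of the two results */
}

int main(void) {
    int a[] = {3, 9, 1, 7, 5};
    printf("max = %d\n", max_dc(a, 0, 4));
    return 0;
}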
APPLICATIONS OF DIVIDE AND CONQUER
Divide and Conquer is widely used in various fields of computer science and
mathematics due to its effectiveness in improving algorithm efficiency. The following
applications illustrate its versatility and power in solving a range of problems:
1. Merge Sort: A sorting algorithm that divides the array into halves, sorts each half, and merges them
back together.
Complexity:
T(n)= O(nlogn)
Space= O(n) for the temporary array during merging
Application: Efficiently sorts large datasets, making it suitable for applications requiring stable sorting.
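A compact C sketch of merge sort (illustrative; the sample data is made up). The tmp buffer is the O(n) extra space noted above:

#include <stdio.h>
#include <string.h>

/* Merge the sorted halves a[lo..mid] and a[mid+1..hi] via the buffer. */
static void merge(int a[], int tmp[], int lo, int mid, int hi) {
    int i = lo, j = mid + 1, k = lo;
    while (i <= mid && j <= hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];  /* <= keeps the sort stable */
    while (i <= mid) tmp[k++] = a[i++];
    while (j <= hi)  tmp[k++] = a[j++];
    memcpy(&a[lo], &tmp[lo], (size_t)(hi - lo + 1) * sizeof(int));
}

static void merge_sort(int a[], int tmp[], int lo, int hi) {
    if (lo >= hi) return;             /* base case: 0 or 1 element   */
    int mid = lo + (hi - lo) / 2;
    merge_sort(a, tmp, lo, mid);      /* divide + conquer left half  */
    merge_sort(a, tmp, mid + 1, hi);  /* divide + conquer right half */
    merge(a, tmp, lo, mid, hi);       /* combine                     */
}

int main(void) {
    int a[] = {5, 2, 9, 1, 6, 3};
    int tmp[6];
    merge_sort(a, tmp, 0, 5);
    for (int i = 0; i < 6; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}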
CONT’D
2. Quick Sort: Selects a pivot element, partitions the array into elements less than and greater than the pivot,
and recursively sorts the partitions.
Complexity:
T(n) = O(nlogn) average case, O(n^2) worst case
Space=O(log n) for the recursion stack
Application: Widely used for sorting in various programming environments due to its average-case
efficiency.
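A C sketch of quick sort using the Lomuto partition with the last element as pivot (one common choice; the slides do not fix a partition scheme):

#include <stdio.h>

static int partition(int a[], int lo, int hi) {
    int pivot = a[hi], i = lo;
    for (int j = lo; j < hi; j++) {
        if (a[j] < pivot) {                    /* move smaller elements left */
            int t = a[i]; a[i] = a[j]; a[j] = t;
            i++;
        }
    }
    int t = a[i]; a[i] = a[hi]; a[hi] = t;     /* pivot to its final position */
    return i;
}

static void quick_sort(int a[], int lo, int hi) {
    if (lo >= hi) return;                      /* base case: 0 or 1 element */
    int p = partition(a, lo, hi);
    quick_sort(a, lo, p - 1);                  /* sort left of the pivot  */
    quick_sort(a, p + 1, hi);                  /* sort right of the pivot */
}

int main(void) {
    int a[] = {5, 2, 9, 1, 6, 3};
    quick_sort(a, 0, 5);
    for (int i = 0; i < 6; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}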
3. Binary Search: Searches for an element in a sorted array by repeatedly dividing the search
interval in half.
Complexity:
Time: O(logn)
Space: O(1) (iterative version)
Application: Efficiently locates elements in sorted datasets, often used in database searches.
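An iterative C sketch matching the O(logn) time / O(1) space figures above (sample data ours):

#include <stdio.h>

/* Returns the index of key in the sorted array a, or -1 if absent. */
static int binary_search(const int a[], int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* halve the search interval */
        if (a[mid] == key) return mid;
        if (a[mid] < key)  lo = mid + 1;
        else               hi = mid - 1;
    }
    return -1;
}

int main(void) {
    int a[] = {1, 3, 5, 7, 9, 11};
    printf("index of 7: %d\n", binary_search(a, 6, 7));   /* 3  */
    printf("index of 4: %d\n", binary_search(a, 6, 4));   /* -1 */
    return 0;
}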
OBJECTIVE QUESTIONS
1. Which of the following best describes the Divide and Conquer strategy?
A) Solving problems without recursion
B) Breaking a problem into smaller sub-problems, solving them independently, and combining their results
C) Using iterative loops to solve problems
D) A linear approach to problem-solving
2. What is the first step in the Divide and Conquer process?
A) Combine
B) Conquer
C) Divide
D) Analyze
3. Which of the following is a characteristic of problems suitable for Divide and Conquer?
A) They must have a single solution
B) They can be solved using a single recursive call
C) They can be divided into independent sub-problems
D) They require iterative solutions only
4. What is the space complexity of the Merge Sort algorithm?
A) O(1)
B) O(log n)
C) O(n)
D) O(n log n)
OBJECTIVE QUESTIONS
5. What is the main advantage of using Divide and Conquer algorithms?
A) They are always faster than iterative algorithms
B) They simplify complex problems by breaking them into manageable parts
C) They require less memory than other algorithms
D) They eliminate the need for recursion
6. In terms of time complexity, what is the best-case scenario for Quick Sort?
A) O(1)
B) O(log n)
C) O(n)
D) O(n log n)
7. What is the primary disadvantage of the recursive approach in Divide and Conquer algorithms?
A) It requires more memory for the recursion stack
B) It is less efficient than iterative methods
C) It cannot solve complex problems
D) It does not allow parallel execution
8. Which of the following problems is NOT typically solved using Divide and Conquer?
A) Merge Sort
B) Binary Search
C) Finding the Maximum Value in an Array
D) Simple Linear Search
CHAPTER THREE: GREEDY METHOD
The Greedy Method is an algorithmic approach that makes a sequence of choices, each of which
looks best at the moment, with the hope of finding a global optimum.
It builds up a solution piece by piece, always choosing the next piece that offers the most immediate
benefit.
Nature of Problems Suitable for Greedy Method:
Optimal Substructure: Problems that can be broken down into smaller sub-problems, where an
optimal solution to the overall problem can be constructed from optimal solutions to its sub-problems.
Greedy Choice Property: A problem exhibits this property if a local optimum choice leads to a global
optimum solution.
No Future Considerations: The choice made at each step does not depend on future choices,
allowing for a straightforward, step-by-step approach.
APPLICATIONS OF THE GREEDY METHOD
1. Activity Selection Problem: Select the maximum number of activities that don't overlap in
time.
Analysis: Sort activities by finish time; iteratively select the next activity that starts after the
last selected one.
Time Complexity:
T(n)= O(nlogn) (due to sorting).
Space: O(n) for storing the sorted list
Optimality: The greedy choice yields an optimal solution.
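A runnable C sketch of this greedy selection (the activity data is made up):

#include <stdio.h>
#include <stdlib.h>

typedef struct { int start, finish; } Activity;

static int by_finish(const void *a, const void *b) {
    return ((const Activity *)a)->finish - ((const Activity *)b)->finish;
}

int main(void) {
    Activity acts[] = {{1,4},{3,5},{0,6},{5,7},{8,9},{5,9}};
    int n = 6;
    qsort(acts, n, sizeof(Activity), by_finish);   /* O(n log n) sort by finish time */

    int count = 1, last_finish = acts[0].finish;   /* always take the earliest finisher */
    for (int i = 1; i < n; i++) {
        if (acts[i].start >= last_finish) {        /* greedy choice: next activity that */
            count++;                               /* starts after the last selected one */
            last_finish = acts[i].finish;
        }
    }
    printf("max non-overlapping activities = %d\n", count);  /* expected: 3 */
    return 0;
}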
CONT’D
2. Fractional Knapsack Problem: Maximize the total value of items that can be carried in a knapsack,
where items can be broken into smaller pieces.
Analysis: Sort items by value-to-weight ratio; take as much of the highest ratio items as possible.
Complexity:
T(n)=O(nlogn) (due to sorting).
Space Complexity: O(1) (if the sorting is done in-place).
Optimality: The greedy approach guarantees the best solution.
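A C sketch of this greedy procedure (the item data is a classic made-up instance):

#include <stdio.h>
#include <stdlib.h>

typedef struct { double weight, value; } Item;

/* Sort by value-to-weight ratio, descending. */
static int by_ratio(const void *a, const void *b) {
    double ra = ((const Item *)a)->value / ((const Item *)a)->weight;
    double rb = ((const Item *)b)->value / ((const Item *)b)->weight;
    return (ra < rb) - (ra > rb);
}

int main(void) {
    Item items[] = {{10, 60}, {20, 100}, {30, 120}};
    int n = 3;
    double cap = 50.0, total = 0.0;

    qsort(items, n, sizeof(Item), by_ratio);               /* O(n log n) */
    for (int i = 0; i < n && cap > 0; i++) {
        double take = items[i].weight < cap ? items[i].weight : cap;
        total += items[i].value * (take / items[i].weight); /* fractions allowed */
        cap -= take;
    }
    printf("max value = %.1f\n", total);                    /* expected: 240.0 */
    return 0;
}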
3. Prim's and Kruskal's Algorithms: Find the minimum spanning tree (MST) in a weighted graph.
Analysis:
Prim's starts from a vertex and grows the MST (adjacent vertex, minimum-weight edge, no cycle).
Kruskal's sorts edges and adds them if they don't form a cycle (minimum-cost edge, no cycle).
Time Complexity: O(ElogE) for Kruskal's (due to sorting edges); O(ElogV) for Prim's with a binary-heap priority queue.
Space Complexity: O(V) (for storing the MST or disjoint sets).
Optimality: Both algorithms yield an optimal MST.
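A C sketch of Kruskal's algorithm with a simple disjoint-set (union-find); the graph is a made-up example:

#include <stdio.h>
#include <stdlib.h>

#define NV 4
#define NE 5

typedef struct { int u, v, w; } Edge;

static int parent[NV];

static int find(int x) {                  /* disjoint-set find with path halving */
    while (parent[x] != x) {
        parent[x] = parent[parent[x]];
        x = parent[x];
    }
    return x;
}

static int by_weight(const void *a, const void *b) {
    return ((const Edge *)a)->w - ((const Edge *)b)->w;
}

int main(void) {
    Edge edges[NE] = {{0,1,10},{0,2,6},{0,3,5},{1,3,15},{2,3,4}};
    qsort(edges, NE, sizeof(Edge), by_weight);   /* O(E log E) edge sort */

    for (int i = 0; i < NV; i++) parent[i] = i;

    int cost = 0;
    for (int i = 0; i < NE; i++) {
        int ru = find(edges[i].u), rv = find(edges[i].v);
        if (ru != rv) {                   /* no cycle: keep the min-cost edge */
            parent[ru] = rv;
            cost += edges[i].w;
        }
    }
    printf("MST cost = %d\n", cost);      /* expected: 4 + 5 + 10 = 19 */
    return 0;
}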
OBJECTIVE QUESTIONS
6. Which of the following is NOT a characteristic of the Greedy Method?
A) Local optimum selection
B) Optimal substructure
C) Backtracking
D) Immediate benefit selection
7. In Huffman Coding, what is the main goal of the Greedy approach?
A) Minimize the number of characters
B) Minimize the average length of the code
C) Maximize the number of unique characters
D) Sort characters alphabetically
8. Which of the following problems can be effectively solved using a Greedy approach?
A) Traveling Salesman Problem
B) Fractional Knapsack Problem
C) Longest Common Subsequence
D) Edit Distance
CHAPTER FOUR: DYNAMIC PROGRAMMING
Dynamic Programming (DP) is a method for solving complex problems by breaking them down into simpler sub-problems. It is
applicable when the problem can be divided into overlapping sub-problems that can be solved
independently.
It is used to solve optimization problems.
Nature of Problems:
Optimal Substructure: A problem exhibits optimal substructure if an optimal solution can be
constructed from optimal solutions of its sub-problems.
Overlapping Sub-problems: DP is used when the same sub-problems are solved multiple times. Instead
of solving them repeatedly, DP stores the results of sub-problems to avoid redundant calculations.
Memoization: A top-down approach where you solve a problem recursively and store the results of sub-
problems in a table (usually a dictionary or array) to avoid recalculating them.
Tabulation: A bottom-up approach where you solve all possible sub-problems and store their results in a
table. This method typically involves filling out a table iteratively.
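A minimal C sketch contrasting the two approaches on the Fibonacci numbers (our example; Fibonacci also appears as application 6 below):

#include <stdio.h>

#define N 50
static long long memo[N + 1];   /* 0 marks "not computed yet" */

/* Memoization (top-down): recurse, but cache each sub-problem's result. */
long long fib_memo(int n) {
    if (n <= 1) return n;
    if (memo[n] != 0) return memo[n];            /* reuse the stored result */
    return memo[n] = fib_memo(n - 1) + fib_memo(n - 2);
}

/* Tabulation (bottom-up): fill the table iteratively from the base cases. */
long long fib_tab(int n) {
    long long dp[N + 1];
    dp[0] = 0; dp[1] = 1;
    for (int i = 2; i <= n; i++)
        dp[i] = dp[i - 1] + dp[i - 2];
    return dp[n];
}

int main(void) {
    printf("fib(40) memoized:  %lld\n", fib_memo(40));
    printf("fib(40) tabulated: %lld\n", fib_tab(40));
    return 0;
}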
APPLICATIONS OF DYNAMIC PROGRAMMING
2. 0/1 Knapsack Problem: Given weights and values of items, determine the maximum value that can be
carried without exceeding the weight limit.
DP Approach: Build a 2D table where dp[i][w] represents the maximum value achievable with the first i
items and a weight limit w.
Time Complexity: O(n*W)
Where n is the number of items and W is the weight limit.
Space Complexity: O(n*W) Requires a table of size n x W.
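A compact C sketch of the table filling described above (item weights and values are made up):

#include <stdio.h>

#define NITEMS 4
#define CAP    7

int main(void) {
    int wt[NITEMS]  = {1, 3, 4, 5};
    int val[NITEMS] = {1, 4, 5, 7};
    int dp[NITEMS + 1][CAP + 1] = {0};           /* row 0 / col 0 are base cases */

    for (int i = 1; i <= NITEMS; i++) {
        for (int w = 0; w <= CAP; w++) {
            dp[i][w] = dp[i - 1][w];             /* option 1: skip item i        */
            if (wt[i - 1] <= w) {                /* option 2: take it, if it fits */
                int take = val[i - 1] + dp[i - 1][w - wt[i - 1]];
                if (take > dp[i][w]) dp[i][w] = take;
            }
        }
    }
    printf("max value = %d\n", dp[NITEMS][CAP]); /* expected: 9 */
    return 0;
}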
3. Matrix Chain Multiplication: Determine the optimal order to multiply a chain of matrices to minimize the
number of scalar multiplications.
DP Approach: Use a table dp[i][j] to store the minimum number of multiplications needed to multiply
matrices from i to j.
Time Complexity: O(n³), Where n is the number of matrices.
Space Complexity: O(n²), Requires a table of size n x n.
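A C sketch of this interval DP (the matrix dimensions are a made-up example):

#include <stdio.h>
#include <limits.h>

#define N 4   /* number of matrices */

int main(void) {
    /* Matrix i has dimensions p[i-1] x p[i] (1-based). */
    int p[N + 1] = {10, 20, 5, 30, 8};
    long dp[N + 1][N + 1] = {0};   /* dp[i][j]: min scalar mults for matrices i..j */

    for (int len = 2; len <= N; len++) {          /* increasing chain length */
        for (int i = 1; i + len - 1 <= N; i++) {
            int j = i + len - 1;
            dp[i][j] = LONG_MAX;
            for (int k = i; k < j; k++) {         /* try every split point */
                long cost = dp[i][k] + dp[k + 1][j]
                          + (long)p[i - 1] * p[k] * p[j];
                if (cost < dp[i][j]) dp[i][j] = cost;
            }
        }
    }
    printf("min scalar multiplications = %ld\n", dp[1][N]);  /* expected: 2600 */
    return 0;
}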
APPLICATIONS OF DYNAMIC PROGRAMMING
4. Subset Sum Problem: Determine if a subset of numbers can sum up to a specific target.
DP Approach: Create a Boolean table dp[i][j] where i represents the first i numbers and j the target sum,
indicating whether the target can be achieved.
Time Complexity: O(n*W), Where n is the number of elements and W is the target sum.
Space Complexity: O(n*W), Requires a table of size n x W.
5. Floyd-Warshall Algorithm: Computes shortest paths between all pairs of vertices via a dynamic programming approach that systematically updates path lengths through each intermediate vertex.
Time Complexity: O(V³), Where V is the number of vertices in the graph.
Space Complexity: O(V²), Requires a distance matrix of size V x V.
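A C sketch of the algorithm's triple loop (the sample graph and the INF sentinel are ours):

#include <stdio.h>

#define V 4
#define INF 1000000   /* large sentinel standing in for "no edge" */

int main(void) {
    int d[V][V] = {
        {0,   5,   INF, 10},
        {INF, 0,   3,   INF},
        {INF, INF, 0,   1},
        {INF, INF, INF, 0}
    };
    /* For each intermediate vertex k, relax every pair (i, j). */
    for (int k = 0; k < V; k++)
        for (int i = 0; i < V; i++)
            for (int j = 0; j < V; j++)
                if (d[i][k] + d[k][j] < d[i][j])
                    d[i][j] = d[i][k] + d[k][j];

    printf("shortest 0 -> 3 = %d\n", d[0][3]);  /* 5 + 3 + 1 = 9 */
    return 0;
}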
6. Fibonacci Sequence: Calculate the nth Fibonacci number.
DP Approach: Use an array to store previously computed Fibonacci numbers.
Time Complexity: O(n), Each Fibonacci number is computed once.
Space Complexity: O(n), Requires storage for the Fibonacci sequence up to n.
SUMMARY OF TIME AND SPACE COMPLEXITY FOR DYNAMIC PROGRAMMING APPLICATIONS
CHAPTER FIVE: BACKTRACKING
Backtracking is an algorithmic technique used for solving problems incrementally by exploring all
possible configurations. It builds candidates for solutions and abandons them as soon as it determines
that they cannot lead to a valid solution.
Nature of Problems
Backtracking is particularly effective for:
Combinatorial Problems: Problems that require generating combinations or permutations of a
set.
Constraint Satisfaction Problems: Problems where solutions must satisfy certain constraints
(e.g., Sudoku, N-Queens).
Optimization Problems: Problems that seek the best solution among many possibilities (e.g., the
Traveling Salesman Problem).
Decision Problems: Problems that require making a series of choices with the potential for
backtracking.
KEY TERMINOLOGIES
State: A specific configuration or condition of the problem at a given point in the solution
process. Each state represents a partial solution.
Solution Space: The set of all possible configurations or solutions to the problem.
Backtracking explores this space to find valid solutions.
Recursive Backtracking: A method where the algorithm recursively attempts to build a
solution by exploring each option. If a solution path leads to a dead end, the algorithm
backtracks to the previous state.
Pruning: The process of eliminating branches in the solution space that cannot lead to a
valid solution. Pruning reduces the number of configurations that need to be explored,
improving efficiency.
CONT'D
Constraint: Conditions that must be met for a solution to be considered valid. Constraints
guide the backtracking process to avoid exploring invalid states.
Depth-First Search (DFS): A common strategy used in backtracking algorithms, where the
solution is built one step at a time by exploring as far down a branch as possible before
backtracking.
All Solutions vs. First Solution: Some backtracking problems require finding all possible
solutions, while others stop at the first valid solution encountered.
Base Case: A condition that stops the recursion. It usually represents a complete solution or
an invalid state that requires backtracking.
APPLICATIONS OF BACKTRACKING
N-Queens Problem: Placing N queens on an N×N chessboard so that no two queens threaten
each other.
Sudoku Solver: Filling a 9x9 grid such that each column, row, and 3x3 sub-grid contains all
digits from 1 to 9.
Permutations and Combinations: Generating all possible arrangements of a set of elements.
Subset Sum Problem: Finding subsets of a set that sum to a specific target.
Graph Coloring: Assigning colors to the vertices of a graph so that no two adjacent vertices
share the same color.
Hamiltonian Cycle Detection: Finding a Hamiltonian cycle in a graph (a cycle that starts
from a given vertex, visits each vertex exactly once, and returns to its starting point).
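As an illustration of recursive backtracking with pruning, a C sketch that counts all N-Queens solutions (board size and names are ours):

#include <stdio.h>
#include <stdlib.h>

#define N 8
static int col[N];      /* col[r] = column of the queen placed on row r */

/* Can a queen go at (row, c), given rows 0..row-1 are already placed? */
static int safe(int row, int c) {
    for (int r = 0; r < row; r++) {
        if (col[r] == c) return 0;                 /* same column   */
        if (abs(col[r] - c) == row - r) return 0;  /* same diagonal */
    }
    return 1;
}

/* Recursive backtracking: try each column in this row; a dead end
   returns to the caller, which tries the next option (backtrack). */
static int solve(int row) {
    if (row == N) return 1;            /* base case: all rows filled, one solution */
    int count = 0;
    for (int c = 0; c < N; c++) {
        if (safe(row, c)) {            /* pruning: skip branches that violate constraints */
            col[row] = c;
            count += solve(row + 1);
        }
    }
    return count;
}

int main(void) {
    printf("%d-queens solutions: %d\n", N, solve(0));  /* 92 for N = 8 */
    return 0;
}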
SUMMARY QUESTIONS
1. What is the primary technique used in dynamic programming to avoid redundant calculations?
A) Greedy approach
B) Backtracking
C) Memoization
D) Divide and conquer
2. Which of the following problems can be solved using dynamic programming?
A) Traveling Salesman Problem
3. What is the time complexity of the 0/1 Knapsack problem using dynamic programming?
A) O(n)
B) O(nW)
C) O(2^n)
D) O(n²)
4. In the Fibonacci sequence problem using dynamic programming, what is the space complexity if using an iterative approach?
5. Which of the following problems is NOT typically solved using backtracking?
A) N-Queens Problem
B) Sudoku Solver
C) Coin Change Problem
D) Hamiltonian Path Problem
2. During backtracking, what action is taken when a partial solution is found to be invalid?
A) Continue without changes
B) Accept the solution
C) Backtrack to the previous state
D) Ignore the current state
3. What technique is used in backtracking to eliminate invalid solutions?
A) Pruning
B) Sorting
C) Merging
D) Binning
4. Which of the following statements is true about dynamic programming?
A) It is always faster than backtracking.
B) It guarantees an optimal solution.
C) It can only be used for polynomial-time problems.
D) It is primarily used for sorting data.