
Adsa Practice Set 1 Sol by Me

1. Dijkstra's algorithm finds shortest paths in a graph. It works by maintaining a priority queue of tentative distances and repeatedly selecting the unvisited node with the smallest tentative distance. 2. Dynamic programming problems have optimal substructure and overlapping subproblems; the technique breaks problems into subproblems and stores their results to avoid recomputing them. Divide and conquer breaks problems into independent subproblems, while greedy algorithms make locally optimal choices at each step. 3. The longest common subsequence problem finds the longest sequence that is a subsequence of both input strings. It can be solved in O(m*n) time and space using dynamic programming.

1.

BFS
https://favtutor.com/blogs/breadth-first-search-python
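Beyond the linked tutorial, a minimal BFS sketch in Python (the adjacency-list graph below is an illustrative example, not from the problem set):

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first traversal; returns nodes in the order visited."""
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        node = queue.popleft()          # FIFO queue gives level-by-level order
        order.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)   # mark when enqueued, so no duplicates
                queue.append(neighbor)
    return order

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs(graph, 'A'))  # ['A', 'B', 'C', 'D']
```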

DFS
https://favtutor.com/blogs/depth-first-search-python
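Similarly, a minimal recursive DFS sketch in Python (same illustrative example graph as above):

```python
def dfs(graph, start, visited=None, order=None):
    """Recursive depth-first traversal; returns nodes in the order visited."""
    if visited is None:
        visited, order = set(), []
    visited.add(start)
    order.append(start)
    for neighbor in graph.get(start, []):
        if neighbor not in visited:     # recurse deep before trying siblings
            dfs(graph, neighbor, visited, order)
    return order

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(dfs(graph, 'A'))  # ['A', 'B', 'D', 'C']
```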
2.
3. DIJKSTRA'S ALGORITHM
Sol by making a table of the given problem:
(ChatGPT answer)
Dijkstra's algorithm is a widely used method for finding the shortest paths between nodes in a graph with non-negative edge weights. It was conceived by computer scientist Edsger W. Dijkstra in 1956. Here is a step-by-step description of
Dijkstra's algorithm:

Initialize: Set the distance to the source node as 0 and all other distances to infinity. Create a priority queue
(or min-heap) to keep track of the nodes and their tentative distances.
Enqueue the source node: add the source node to the priority queue with a distance of 0.
Explore neighbors: For the current node, consider all of its neighbors and calculate their tentative distances
through the current node. Compare the newly calculated tentative distance to the current assigned value and
assign the smaller one. For example, if the current node A is marked with a distance of 6, and the edge
connecting it to a neighbor B has a length of 2, then the distance to B through A is 6 + 2 = 8. If the distance
to B was previously greater than 8, then update it to 8.
Select the next node: After considering all the neighbors of the current node, mark the current node as
visited. Select the unvisited node with the smallest tentative distance from the priority queue as the next
"current node," and go back to step 3.
Repeat: Continue until all nodes are visited or the destination node is marked as visited.

The final result will be the shortest path from the source node to every other node in the graph. The
algorithm maintains a set of tentative distances for each node, and at each step, it selects the node with the
smallest tentative distance to explore next.

Algorithm for Dijkstra’s Algorithm:


1. Mark the source node with a current distance of 0 and the rest with infinity.
2. Set the unvisited node with the smallest current distance as the current node.
3. For each neighbor N of the current node, add the current node's distance to the weight of the
edge connecting the current node to N. If this sum is smaller than the current distance of N, set it
as the new current distance of N.
4. Mark the current node as visited.
5. Go to step 2 if any nodes are unvisited.
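The steps above can be sketched in Python with a binary heap as the priority queue (the example graph is illustrative, not the problem's table):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source; graph maps node -> list of (neighbor, weight)."""
    dist = {node: float('inf') for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)   # unvisited node with smallest distance
        if node in visited:
            continue                    # stale entry; a shorter path was found earlier
        visited.add(node)
        for neighbor, weight in graph[node]:
            new_dist = d + weight
            if new_dist < dist[neighbor]:
                dist[neighbor] = new_dist
                heapq.heappush(heap, (new_dist, neighbor))
    return dist

graph = {
    'A': [('B', 6), ('C', 2)],
    'B': [('D', 1)],
    'C': [('B', 3), ('D', 5)],
    'D': [],
}
print(dijkstra(graph, 'A'))  # {'A': 0, 'B': 5, 'C': 2, 'D': 6}
```

Note the lazy-deletion trick: instead of decreasing keys in the heap, stale entries are skipped when popped.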
4. ALL-PAIRS SHORTEST PATH (FLOYD–WARSHALL ALGORITHM)
Pseudo code:
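A minimal Floyd–Warshall sketch in Python (the 4-vertex distance matrix is an illustrative assumption, not the given problem):

```python
def floyd_warshall(dist):
    """All-pairs shortest paths on an n x n matrix, updated in place.
    dist[i][j] is the edge weight, float('inf') if no edge, and dist[i][i] = 0."""
    n = len(dist)
    for k in range(n):                  # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

INF = float('inf')
matrix = [
    [0, 3, INF, 7],
    [8, 0, 2, INF],
    [5, INF, 0, 1],
    [2, INF, INF, 0],
]
print(floyd_warshall(matrix))
# [[0, 3, 5, 6], [5, 0, 2, 3], [3, 6, 0, 1], [2, 5, 7, 0]]
```

The triple loop gives O(n^3) time; the outer loop over k must come first, since each pass assumes all paths through vertices 0..k-1 are already optimal.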
5. Dynamic programming, divide and conquer, and greedy algorithms are three different approaches to
solving optimization problems in computer science. Here are the key elements of dynamic programming and
a brief comparison with divide and conquer and greedy approaches:

Dynamic Programming:

Optimal Substructure: The problem can be broken down into smaller, overlapping subproblems. The
solution to the overall problem can be constructed from the solutions of its subproblems.
Overlapping Subproblems: The problem can be divided into subproblems that are reused several times
during the computation.
Memoization or Tabulation: To avoid redundant calculations, dynamic programming often uses
memoization (top-down approach, storing results of recursive calls) or tabulation (bottom-up approach,
building solutions iteratively).
Dynamic Programming Equation: Problems solved using dynamic programming typically involve
constructing a recurrence relation or dynamic programming equation that expresses the solution to a
problem in terms of solutions to its subproblems.
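The memoization/tabulation distinction can be shown side by side; Fibonacci is used here only as a minimal example of overlapping subproblems:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    """Top-down: plain recursion, with results cached so each subproblem is solved once."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

def fib_tab(n):
    """Bottom-up: fill a table from the base cases upward, no recursion."""
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]   # the recurrence relation
    return table[n]

print(fib_memo(30), fib_tab(30))  # 832040 832040
```

Both run in O(n) instead of the exponential time of naive recursion, which recomputes the same subproblems repeatedly.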

Divide and Conquer:

Divide: Break the problem into smaller subproblems of the same type.
Conquer: Solve the subproblems recursively.
Combine: Combine the solutions of the subproblems to get the solution of the original problem.
No Overlapping Subproblems: Divide and conquer does not necessarily assume overlapping subproblems.
Each subproblem is solved independently.
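Merge sort is the standard illustration of these three phases; a minimal sketch:

```python
def merge_sort(items):
    """Divide: split in half. Conquer: sort each half recursively. Combine: merge."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):     # combine two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

The two halves never share subproblems, which is exactly the "no overlapping subproblems" point above.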

Greedy Approach:

Greedy Choice Property: A global optimum can be reached by selecting a local optimum (greedy choice) at
each step.
No Backtracking: Once a decision is made, it is never reconsidered.
Doesn't Always Guarantee Optimal Solution: Greedy algorithms make locally optimal choices, but these
choices may not always lead to a globally optimal solution.
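Greedy coin change illustrates both points: it is optimal for some denomination systems but not others (the denominations below are illustrative):

```python
def greedy_change(amount, coins):
    """Greedy coin change: always take the largest coin that still fits."""
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:       # no backtracking: a taken coin is never returned
            amount -= coin
            result.append(coin)
    return result

print(greedy_change(63, [25, 10, 5, 1]))  # [25, 25, 10, 1, 1, 1] -- optimal here
print(greedy_change(6, [4, 3, 1]))        # [4, 1, 1] -- but [3, 3] uses fewer coins
```

For [25, 10, 5, 1] the greedy choice property holds; for [4, 3, 1] it fails, so a globally optimal answer would need dynamic programming instead.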

Comparison:

Optimality: Dynamic programming guarantees an optimal solution, as it systematically explores and stores
solutions to subproblems. Greedy algorithms, on the other hand, may or may not provide an optimal
solution.
Efficiency: Dynamic programming can be more efficient than naive recursive solutions by avoiding
redundant computations through memoization or tabulation. Divide and conquer can also be efficient, but it
may involve redundant computations.
Use Cases: Dynamic programming is well-suited for problems with overlapping subproblems, while divide
and conquer is often applied to problems that can be divided into independent subproblems. Greedy
algorithms are used when a local optimal choice leads to a globally optimal solution.

In summary, while dynamic programming focuses on solving problems with optimal substructure and
overlapping subproblems, divide and conquer divides problems into independent subproblems, and greedy
algorithms make locally optimal choices without necessarily ensuring a globally optimal solution. Each
approach has its strengths and is suitable for different types of problems.
11. COUNTING SORT
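A minimal counting sort sketch in Python for non-negative integer keys (the example input is illustrative):

```python
def counting_sort(items):
    """Counting sort for non-negative integers: O(n + k), k = max key value."""
    if not items:
        return []
    counts = [0] * (max(items) + 1)
    for x in items:
        counts[x] += 1              # tally how many times each value occurs
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)
    return result

print(counting_sort([4, 2, 2, 8, 3, 3, 1]))  # [1, 2, 2, 3, 3, 4, 8]
```

It sorts without comparisons, so it beats the O(n log n) comparison-sort bound when the key range k is small relative to n.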
RADIX SORT:
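A minimal LSD radix sort sketch in Python, one decimal digit per pass (the example input is illustrative):

```python
def radix_sort(items):
    """LSD radix sort for non-negative integers: one stable pass per decimal digit."""
    if not items:
        return []
    exp = 1
    while max(items) // exp > 0:
        buckets = [[] for _ in range(10)]
        for x in items:
            buckets[(x // exp) % 10].append(x)  # stable distribution by current digit
        items = [x for bucket in buckets for x in bucket]
        exp *= 10
    return items

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```

Each pass must be stable (here, buckets preserve arrival order) so that earlier, less significant digits stay correctly ordered.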
7. Longest Common Subsequence (LCS)
The Longest Common Subsequence (LCS) problem is a classic problem in dynamic programming
and string processing. Given two sequences (strings), the task is to find the length of the longest
subsequence that is present in both sequences. A subsequence is a sequence that appears in the
same relative order but not necessarily contiguous. In other words, it is the longest string of
characters that appear in the same order in both strings.

For example, given two strings "ABCDGH" and "AEDFHR", a common subsequence is "ADH" with a
length of 3. The LCS problem aims to find this longest common subsequence.

Time Complexity: The time complexity of the dynamic programming solution for the LCS problem is O(m *
n), where m and n are the lengths of the two input strings. This is because the algorithm fills in a 2D table
with dimensions (m + 1) x (n + 1), and each entry takes constant time to compute based on previous entries.

Space Complexity: The space complexity is also O(m * n) as we use a 2D table to store intermediate results.
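The (m + 1) x (n + 1) table described above can be sketched directly, using the example strings from the text:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence via an (m+1) x (n+1) DP table."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]   # row/column 0 = empty prefix
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # characters match: extend the LCS
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # drop one character
    return dp[m][n]

print(lcs_length("ABCDGH", "AEDFHR"))  # 3 ("ADH")
```

Each cell is computed in O(1) from its three neighbors, giving the O(m*n) time and space stated above.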
0/1 knapsack
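A minimal 0/1 knapsack sketch in Python using a 1-D DP table over capacities (item weights and values are illustrative):

```python
def knapsack_01(weights, values, capacity):
    """0/1 knapsack: each item taken at most once; returns the maximum total value."""
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):   # iterate capacity downward so each
            dp[c] = max(dp[c], dp[c - w] + v)  # item is counted at most once
    return dp[capacity]

print(knapsack_01([1, 3, 4, 5], [1, 4, 5, 7], 7))  # 9 (items of weight 3 and 4)
```

The reverse capacity loop is what distinguishes 0/1 from the unbounded variant; iterating forward would let an item be reused.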
Fractional knapsack
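A minimal fractional knapsack sketch in Python; since fractions are allowed, a greedy pass by value/weight ratio is optimal (the example items are illustrative):

```python
def fractional_knapsack(weights, values, capacity):
    """Fractional knapsack: greedy by value/weight ratio; fractions of items allowed."""
    items = sorted(zip(weights, values), key=lambda wv: wv[1] / wv[0], reverse=True)
    total = 0.0
    for w, v in items:
        if capacity <= 0:
            break
        take = min(w, capacity)        # whole item if it fits, else a fraction
        total += v * (take / w)
        capacity -= take
    return total

print(fractional_knapsack([10, 20, 30], [60, 100, 120], 50))  # 240.0
```

Contrast with 0/1 knapsack: once items are divisible, the greedy choice property holds, so no DP table is needed.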
