Daa Interview

Uploaded by nigamshreya2004

What is Algorithm?

A finite set of instructions that specifies a sequence of operations to be carried out in
order to solve a specific problem or class of problems is called an Algorithm.

Analysis of algorithm
The analysis is a process of estimating the efficiency of an algorithm. There are two
fundamental parameters based on which we can analyse an algorithm:

o Space Complexity: The space complexity can be understood as the amount of
space required by an algorithm to run to completion.
o Time Complexity: Time complexity is a function of input size n that refers to
the amount of time needed by an algorithm to run to completion.
Asymptotic Notations
Asymptotic notations are the mathematical notations used to describe the
running time of an algorithm when the input tends towards a particular
value or a limiting value.

Asymptotic notations are mathematical tools to represent the time complexity of algorithms
for asymptotic analysis.

Big-O Notation (O-notation)


Big-O notation represents the upper bound of the running time of an
algorithm. Thus, it gives the worst-case complexity of an algorithm.

Omega Notation (Ω-notation)


Omega notation represents the lower bound of the running time of an
algorithm. Thus, it provides the best case complexity of an algorithm.

Theta Notation (Θ-notation)


Theta notation encloses the function from above and below. Since it
represents the upper and the lower bound of the running time of an
algorithm, it is used for analyzing the average-case complexity of an
algorithm.
Recurrence Relation
A recurrence is an equation or inequality that describes a function in terms of its values
on smaller inputs. To solve a recurrence relation means to obtain a function defined
on the natural numbers that satisfies the recurrence.

Tower of Hanoi
1. It is a classic problem where you try to move all the disks from one peg to another
peg using only three pegs.

2. Initially, all of the disks are stacked on top of each other with larger disks under the
smaller disks.

3. You may move the disks to any of three pegs as you attempt to relocate all of the
disks, but you cannot place the larger disks over smaller disks and only one disk can
be transferred at a time.

This problem can be easily solved by a Divide & Conquer algorithm: move n - 1 disks to
the auxiliary peg, move the largest disk to the target, then move the n - 1 disks on top
of it.

Time = 2^n - 1 moves
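The recursive Divide & Conquer solution can be sketched in Python (the peg labels 'A', 'B', 'C' are illustrative):

```python
def hanoi(n, source, target, auxiliary, moves):
    """Move n disks from source to target using the auxiliary peg."""
    if n == 0:
        return
    hanoi(n - 1, source, auxiliary, target, moves)  # move n-1 disks out of the way
    moves.append((source, target))                  # move the largest disk
    hanoi(n - 1, auxiliary, target, source, moves)  # move n-1 disks back on top

moves = []
hanoi(3, 'A', 'C', 'B', moves)
print(len(moves))  # 2^3 - 1 = 7 moves
```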

Hashing
Hashing is the transformation of a string of characters into a usually shorter fixed-length
value or key that represents the original string.

Why we need Hashing?


Suppose we have 50 employees, and we have to give a 4-digit key to each employee (for
security), and we want that after entering a key, the user is mapped directly to the
position where the data is stored.

If we reserve one location per possible 4-digit key, we have to reserve addresses 0000
to 9999, because any 4-digit number can be used as a key. That is a lot of wasted space.

To solve this problem, we use hashing, which produces a smaller hash-table index
corresponding to the key of the user.
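A minimal sketch of the idea, assuming a simple modulo hash; the table size 97 (a prime near the number of keys) is illustrative:

```python
TABLE_SIZE = 97  # a prime close to the number of employees keeps collisions low

def hash_index(key: int) -> int:
    """Map a 4-digit key into a small table instead of reserving 10,000 slots."""
    return key % TABLE_SIZE

table = [None] * TABLE_SIZE
table[hash_index(4972)] = "employee record"  # store by hashed index
print(hash_index(4972))
```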
Dynamic Programming
Dynamic programming is a technique that breaks a problem into sub-problems and
saves their results for future use so that we do not need to compute them again. The
property that an optimal overall solution can be composed of optimal solutions to its
sub-problems is known as the optimal substructure property. The main use of dynamic
programming is to solve optimization problems, i.e., problems where we try to find the
minimum or the maximum solution. Dynamic programming guarantees finding the
optimal solution of a problem if one exists.

1. Top-Down Approach:
 In the top-down approach, you start with the larger picture or
overarching goal and then break it down into smaller, more
manageable components or tasks.
 It begins with the formulation of a broad overview or strategy and then
proceeds to focus on the details.
 It is akin to starting from the highest level of abstraction and gradually
zooming into the finer details.
 In software development, the top-down approach often involves
starting with the main module or the main function and then breaking
it down into submodules or functions.
Example: Imagine you are planning a project to build a house. You start by
envisioning the overall design, layout, and functionality of the house. Then,
you break down the project into phases like construction, plumbing, electrical
wiring, and interior design.
2. Bottom-Up Approach:
 In contrast, the bottom-up approach starts with the individual
components or details and gradually builds up to form a complete
system or solution.
 It emphasizes starting with the specific elements and then integrating
them to form larger structures or solutions.
 It often involves grassroots efforts where individual contributions or
ideas are aggregated to create a comprehensive solution.
 In software development, the bottom-up approach might involve
developing small, reusable components or modules and then
integrating them to build larger systems.
Example: Consider assembling a jigsaw puzzle. You start by identifying and
putting together individual pieces based on their shapes and colors. Gradually,
as you connect more pieces, the overall picture emerges.
Matrix chain multiplication is a specific problem in the field of
computer science and mathematics that deals with the efficient multiplication of matrices.
Given a chain of matrices, the goal is to determine the most efficient way to multiply these
matrices together.
In the DP table, for each sub-chain we also record the split index k that achieved the
minimum cost. For example, k(1,4) = 3 means the optimal split of the chain A B C D is
after the third matrix: (A B C)(D).
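A standard DP sketch for matrix chain multiplication; the dimension list is an illustrative example (matrix i has shape dims[i-1] x dims[i]):

```python
def matrix_chain(dims):
    """Return the minimum number of scalar multiplications for the chain."""
    n = len(dims) - 1  # number of matrices
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):              # sub-chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            # try every split point k between i and j
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)
            )
    return cost[1][n]

print(matrix_chain([10, 20, 30, 40, 30]))  # 30000
```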
Longest Common Subsequence (LCS)
A subsequence of a given sequence is just the given sequence with some elements left
out.

Given two sequences X and Y, we say that the sequence Z is a common subsequence of
X and Y if Z is a subsequence of both X and Y.

In the longest common subsequence problem, we are given two sequences X =
(x1, x2, ..., xm) and Y = (y1, y2, ..., yn) and wish to find a maximum-length common
subsequence of X and Y. The LCS problem can be solved using dynamic programming.
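The DP solution can be sketched as follows (the test strings are an illustrative textbook example):

```python
def lcs_length(x, y):
    """Length of the longest common subsequence of x and y."""
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]  # dp[i][j]: LCS of x[:i], y[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # characters match: extend
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4 (e.g., "BCBA")
```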

What is the 0/1 knapsack problem?


In the 0/1 knapsack problem, each item is either taken whole or not taken at all. For
example, suppose we have two items weighing 2 kg and 3 kg, respectively. If we want
the 2 kg item, we cannot take only 1 kg of it (items are not divisible); we have to take
the whole 2 kg item or leave it. This is the 0/1 knapsack problem: we either pick an
item completely or do not pick it at all. The 0/1 knapsack problem is solved by
dynamic programming.

What is the fractional knapsack problem?


The fractional knapsack problem means that we can divide the item. For example, we
have an item of 3 kg then we can pick the item of 2 kg and leave the item of 1 kg. The
fractional knapsack problem is solved by the Greedy approach.

For the greedy approach, we first write the weights in ascending order, with the profits
listed according to their weights, e.g.:

wi = {3, 4, 5, 6}

The 0/1 knapsack problem is instead solved with the DP recurrence:

V[i,w] = max(V[i−1,w], V[i−1,w−wt[i]] + val[i])
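A minimal sketch of the 0/1 knapsack DP based on the recurrence V[i,w] (the item data is illustrative):

```python
def knapsack_01(weights, values, capacity):
    """Maximum value achievable without splitting any item."""
    n = len(weights)
    # V[i][w] = best value using the first i items with capacity w
    V = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            V[i][w] = V[i - 1][w]            # option 1: skip item i
            if weights[i - 1] <= w:          # option 2: take item i whole
                V[i][w] = max(V[i][w],
                              V[i - 1][w - weights[i - 1]] + values[i - 1])
    return V[n][capacity]

print(knapsack_01([2, 3, 4], [3, 4, 5], 5))  # take the 2kg and 3kg items: value 7
```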
What is tabulation?
Tabulation is a technique used to implement DP algorithms. It is also known as the
bottom-up approach. It starts by solving the lowest-level sub-problems; their solutions
then help solve the next-level sub-problems, and so forth, until all sub-problems have
been solved iteratively. This approach saves time whenever a sub-problem needs the
solution of a sub-problem that has already been solved.

What is Memoization?
Memoization is a technique used to implement DP algorithms. Memoization is also
known as the top-down approach. It starts by solving the highest-level sub-problem
and then solves the next sub-problems recursively. Suppose there are two
sub-problems, A and B. When sub-problem B is called recursively, it can use the
solution of sub-problem A, which has already been computed. Since A and all its
sub-problems are memoized, this avoids re-solving the entire recursion tree generated
by B and saves computation time.
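Both approaches can be illustrated on the Fibonacci numbers; this is a minimal sketch, with `functools.lru_cache` doing the memoization:

```python
from functools import lru_cache

@lru_cache(maxsize=None)           # memoization: top-down, results are cached
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

def fib_tab(n):                    # tabulation: bottom-up table
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_memo(10), fib_tab(10))  # both 55
```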

Kruskal Algorithm
The Kruskal algorithm is used to find the minimum cost of a spanning tree. A spanning
tree is a connected subgraph using all the vertices in which there are no cycles. In other
words, there is a path from any vertex to any other vertex, but no cycles.

What is Minimum Cost Spanning Tree?


The minimum spanning tree is a spanning tree that has the smallest total edge weight.
The Kruskal algorithm takes the graph as input and selects edges from the graph that
form a tree including every vertex of the graph.

Working of Kruskal Algorithm


The Kruskal algorithm starts from the edge with the lowest weight and keeps adding
edges until the goal is reached.

The following are the steps used to implement the Kruskal algorithm:

o First, sort the edges in the ascending order of their edge weights.
o Consider the edge which is having the lowest weight and add it in the spanning tree.
If adding any edge in a spanning tree creates a cycle then reject that edge.
o Keep adding edges until all the vertices are connected (a spanning tree of n vertices
has n - 1 edges).
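The steps above can be sketched with a union-find structure to detect cycles (the edge list is illustrative):

```python
def kruskal(n, edges):
    """edges: list of (weight, u, v); vertices 0..n-1. Returns MST cost."""
    parent = list(range(n))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    cost = 0
    for w, u, v in sorted(edges):     # edges in ascending order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # reject edges that would form a cycle
            parent[ru] = rv
            cost += w
    return cost

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))  # picks weights 1, 2, 3 -> 6
```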

Prim’s algorithm:
We have discussed Kruskal’s algorithm for Minimum Spanning Tree. Like
Kruskal’s algorithm, Prim’s algorithm is also a Greedy algorithm. This
algorithm always starts with a single node and moves through several
adjacent nodes, in order to explore all of the connected edges along the
way.
The algorithm starts with an empty spanning tree. The idea is to maintain
two sets of vertices. The first set contains the vertices already included in
the MST, and the other set contains the vertices not yet included. At every
step, it considers all the edges that connect the two sets and picks the
minimum weight edge from these edges. After picking the edge, it moves
the other endpoint of the edge to the set containing MST.
A group of edges that connects two sets of vertices in a graph is called a cut in graph
theory. So, at every step of Prim's algorithm, find a cut, pick the minimum-weight edge
from the cut, and include the edge's new endpoint in the MST set (the set that contains
the already-included vertices).
How does Prim’s Algorithm Work?
The working of Prim’s algorithm can be described by using the
following steps:
Step 1: Determine an arbitrary vertex as the starting vertex of the
MST.
Step 2: Follow steps 3 to 5 till there are vertices that are not
included in the MST (known as fringe vertex).
Step 3: Find edges connecting any tree vertex with the fringe
vertices.
Step 4: Find the minimum among these edges.
Step 5: Add the chosen edge to the MST if it does not form any
cycle.
Step 6: Return the MST and exit
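The steps above can be sketched with a heap of edges crossing the cut (the adjacency list is illustrative):

```python
import heapq

def prim(adj, start=0):
    """adj: {u: [(weight, v), ...]}. Returns total MST weight."""
    visited = {start}
    heap = list(adj[start])            # edges crossing the cut from the MST set
    heapq.heapify(heap)
    cost = 0
    while heap and len(visited) < len(adj):
        w, v = heapq.heappop(heap)     # minimum-weight edge across the cut
        if v in visited:
            continue                   # both endpoints already in the MST
        visited.add(v)
        cost += w
        for edge in adj[v]:            # new fringe edges
            heapq.heappush(heap, edge)
    return cost

adj = {0: [(1, 1), (4, 2)], 1: [(1, 0), (3, 2), (2, 3)],
       2: [(4, 0), (3, 1), (5, 3)], 3: [(2, 1), (5, 2)]}
print(prim(adj))  # same MST weight as Kruskal on this graph: 6
```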
Greedy Algorithm
The greedy method is one of the strategies like Divide and conquer used to solve the
problems. This method is used for solving optimization problems. An optimization
problem is a problem that demands either maximum or minimum results. Let's
understand through some terms.

The Greedy method is the simplest and most straightforward approach. It is not a
specific algorithm, but a technique. Its defining feature is that each decision is made
on the basis of the currently available information, without worrying about the effect
of the current decision on the future.

This technique is basically used to determine the feasible solution that may or may not
be optimal. The feasible solution is a subset that satisfies the given criteria. The optimal
solution is the solution which is the best and the most favorable solution in the subset.
In the case of feasible, if more than one solution satisfies the given criteria then those
solutions will be considered as the feasible, whereas the optimal solution is the best
solution among all the solutions.

Fractional Knapsack Problem


The fractional knapsack problem is a variant of the knapsack problem in which items
may be broken into fractions in order to maximize the profit. The problem in which we
may break an item is known as the fractional knapsack problem.

This problem can be solved with the help of using two techniques:

o Brute-force approach: The brute-force approach tries all the possible solutions with all
the different fractions but it is a time-consuming approach.
o Greedy approach: In Greedy approach, we calculate the ratio of profit/weight, and
accordingly, we will select the item. The item with the highest ratio would be selected
first.

There are basically three approaches to solve the problem:

o The first approach is to select the item based on the maximum profit.
o The second approach is to select the item based on the minimum weight.
o The third approach is to calculate the ratio of profit/weight.
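The third approach (highest profit/weight ratio first) is the one that yields the optimal greedy solution; a minimal sketch, with illustrative item data:

```python
def fractional_knapsack(weights, values, capacity):
    """Maximum profit when items may be taken fractionally."""
    # sort items by profit/weight ratio, highest first
    items = sorted(zip(weights, values), key=lambda t: t[1] / t[0], reverse=True)
    profit = 0.0
    for w, v in items:
        if capacity >= w:            # take the whole item
            profit += v
            capacity -= w
        else:                        # take the fraction that fits, then stop
            profit += v * capacity / w
            break
    return profit

print(fractional_knapsack([10, 20, 30], [60, 100, 120], 50))  # 240.0
```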
Huffman Codes
o Data can be encoded efficiently using Huffman Codes.
o It is a widely used and beneficial technique for compressing data.
o Huffman's greedy algorithm uses a table of the frequencies of occurrence of each
character to build up an optimal way of representing each character as a binary string.
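A minimal sketch of Huffman's greedy algorithm using a heap; the frequency table is an illustrative example:

```python
import heapq

def huffman_codes(freq):
    """freq: {symbol: frequency}. Returns {symbol: bit string}."""
    # each heap entry: (frequency, tiebreak, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))  # merge into one subtree
        count += 1
    return heap[0][2]

codes = huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5})
print(codes["a"])  # the most frequent symbol gets the shortest code
```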

Travelling Sales Person Problem


The travelling salesman problem involves a salesman and a set of cities. The salesman
has to visit each of the cities, starting from a certain one (e.g., the hometown) and
returning to the same city. The challenge of the problem is that the travelling salesman
needs to minimize the total length of the trip.

Dijkstra's Algorithm is a popular method for finding the shortest paths from a single
source vertex to all other vertices in a weighted graph with non-negative edge
weights. At each step it settles the unvisited vertex with the smallest known distance
and relaxes the edges leaving it.
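A minimal heap-based sketch of Dijkstra's Algorithm (the graph is illustrative):

```python
import heapq

def dijkstra(adj, source):
    """adj: {u: [(weight, v), ...]}. Returns shortest distance to every vertex."""
    dist = {u: float("inf") for u in adj}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)        # closest unsettled vertex
        if d > dist[u]:
            continue                      # stale heap entry
        for w, v in adj[u]:
            if d + w < dist[v]:           # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

adj = {0: [(4, 1), (1, 2)], 1: [(1, 3)], 2: [(2, 1), (5, 3)], 3: []}
print(dijkstra(adj, 0))  # {0: 0, 1: 3, 2: 1, 3: 4}
```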
Bellman Ford Algorithm
The Bellman-Ford algorithm is a single-source shortest path algorithm. It is used to find
the shortest distance from a single vertex to all the other vertices of a weighted graph.
There are other shortest-path algorithms, such as Dijkstra's algorithm, but if the
weighted graph contains negative weight values, Dijkstra's algorithm is not guaranteed
to produce the correct answer. In contrast, the Bellman-Ford algorithm gives the
correct answer even if the weighted graph contains negative weight values (as long as
no negative-weight cycle is reachable from the source).

Rule of this algorithm

We go on relaxing all the edges (n - 1) times, where n = number of vertices.
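The (n - 1)-pass relaxation rule can be sketched as follows; the edge list is illustrative, and an extra pass detects negative cycles:

```python
def bellman_ford(n, edges, source):
    """edges: list of (u, v, w); vertices 0..n-1. Handles negative weights."""
    dist = [float("inf")] * n
    dist[source] = 0
    for _ in range(n - 1):                 # relax all edges (n - 1) times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                  # one extra pass: any improvement
        if dist[u] + w < dist[v]:          # means a negative-weight cycle
            raise ValueError("graph contains a negative-weight cycle")
    return dist

edges = [(0, 1, 4), (0, 2, 5), (1, 3, 7), (2, 1, -3)]
print(bellman_ford(4, edges, 0))  # [0, 2, 5, 9]
```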

NP: is the set of decision problems that can be verified in polynomial time.

NP-Hard: L is NP-hard if for all L' ϵ NP, L' ≤p L. Thus if we can solve L in polynomial
time, we can solve all NP problems in polynomial time.

NP-Complete L is NP-complete if

1. L ϵ NP and
2. L is NP-hard

If any NP-complete problem is solvable in polynomial time, then every NP-complete
problem is also solvable in polynomial time. Conversely, if we can prove that any one
NP-complete problem cannot be solved in polynomial time, then no NP-complete
problem can be solved in polynomial time.
