DAA Final

The document provides an overview of algorithms, their characteristics, and various algorithm design strategies including Divide and Conquer, Greedy, Dynamic Programming, and Backtracking. It discusses specific algorithms such as Prim's Algorithm for finding Minimum Spanning Trees, Merge Sort for sorting, and graph traversal methods like Breadth-First Search and Depth-First Search. Additionally, it covers performance analysis of algorithms in terms of space and time complexity, along with applications of the Knapsack Problem.

1. What is an algorithm? Explain the different characteristics of an algorithm.
An algorithm is a step-by-step procedure that defines a set of instructions to be executed in a certain order to get the desired output. Algorithms are generally created independent of underlying languages, i.e. an algorithm can be implemented in more than one programming language.
From the data structure point of view, following are some important categories of
algorithms −
 Search − Algorithm to search an item in a data structure.
 Sort − Algorithm to sort items in a certain order.
 Insert − Algorithm to insert item in a data structure.
 Update − Algorithm to update an existing item in a data structure.
 Delete − Algorithm to delete an existing item from a data structure.

Characteristics of an Algorithm

Not every procedure can be called an algorithm. An algorithm should have the following characteristics −
 Unambiguous − An algorithm should be clear and unambiguous. Each of its steps (or phases), and their inputs/outputs, should be clear and must lead to only one meaning.
 Input − An algorithm should have 0 or more well-defined inputs.
 Output − An algorithm should have 1 or more well-defined outputs, which should match the desired output.
 Finiteness − An algorithm must terminate after a finite number of steps.
 Feasibility − An algorithm should be feasible with the available resources.
 Independent − An algorithm should have step-by-step directions that are independent of any programming code.
2. Prim's Algorithm

In this article, we will discuss Prim's algorithm. Along with the algorithm, we will also see its complexity, working, an example, and an implementation.

Before starting the main topic, we should discuss the basic and important terms such as
spanning tree and minimum spanning tree.

Spanning tree - A spanning tree is a subgraph of an undirected connected graph that includes all of its vertices and contains no cycles.

Minimum Spanning tree - A minimum spanning tree can be defined as the spanning tree in which the sum of the weights of the edges is minimum. The weight of a spanning tree is the sum of the weights given to the edges of the spanning tree.

Now, let's start the main topic.

Prim's Algorithm is a greedy algorithm that is used to find the minimum spanning tree
from a graph. Prim's algorithm finds the subset of edges that includes every vertex of the
graph such that the sum of the weights of the edges can be minimized.

Prim's algorithm starts with a single node and, at every step, explores all the adjacent nodes along with their connecting edges. At each step, the minimum-weight edge that causes no cycle is selected.

3. Merge Sort Algorithm
Merge Sort follows the rule of Divide and Conquer to sort a given set of numbers/elements recursively, and is therefore faster on large inputs than simple quadratic sorts.

Before moving forward with Merge Sort, check these topics out first:

 Selection Sort
 Insertion Sort
 Space Complexity of Algorithms
 Time Complexity of Algorithms

In the last two tutorials, we learned about Selection Sort and Insertion Sort, both of which have a worst-case running time of O(n²). As the size of the input grows, insertion and selection sort can take a long time to run.

Merge sort, on the other hand, runs in O(n log n) time in all cases.

In Merge Sort, the given unsorted array with n elements is divided into n subarrays, each having one element, because a single element is always sorted in itself. Then it repeatedly merges these subarrays to produce new sorted subarrays, and in the end one complete sorted array is produced.

The concept of Divide and Conquer involves three steps:

1. Divide the problem into multiple small problems.

2. Conquer the subproblems by solving them. The idea is to break down the
problem into atomic subproblems, where they are actually solved.

3. Combine the solutions of the subproblems to find the solution of the actual
problem.
4. Elaborate various algorithm design strategies.

Divide and Conquer Approach: It is a top-down approach. Algorithms that follow the divide & conquer technique involve three steps:

o Divide the original problem into a set of subproblems.
o Solve every subproblem individually, recursively.
o Combine the solutions of the subproblems (top level) into a solution of the whole original problem.

Greedy Technique: The greedy method is used to solve optimization problems. An optimization problem is one in which we are given a set of input values which are required either to be maximized or minimized (known as the objective), possibly subject to some constraints or conditions.

o A greedy algorithm always makes the choice (the greedy criterion) that looks best at the moment, to optimize a given objective.
o A greedy algorithm doesn't always guarantee the optimal solution; however, it generally produces a solution that is very close in value to the optimal one.

Dynamic Programming: Dynamic Programming is a bottom-up approach: we solve all possible small problems and then combine their solutions to obtain solutions for bigger problems. This is particularly helpful when the number of overlapping subproblems is exponentially large. Dynamic Programming is frequently applied to optimization problems.

Branch and Bound: In a Branch & Bound algorithm, a given subproblem which cannot be bounded has to be divided into at least two new restricted subproblems. Branch and Bound algorithms are methods for global optimization in non-convex problems. Branch and Bound algorithms can be slow: in the worst case they require effort that grows exponentially with problem size, but in some cases we are lucky and the method converges with much less effort.

Randomized Algorithms: A randomized algorithm is defined as an algorithm that is allowed to access a source of independent, unbiased random bits, and it is then allowed to use these random bits to influence its computation.

Backtracking Algorithm: A backtracking algorithm tries each possibility until it finds the right one. It is a depth-first search of the set of possible solutions. During the search, if an alternative doesn't work, the algorithm backtracks to the choice point, the place which presented different alternatives, and tries the next alternative.
5. Greedy Algorithm

The greedy method is one of the strategies, like Divide and Conquer, used to solve problems. This method is used for solving optimization problems. An optimization problem is a problem that demands either a maximum or a minimum result. Let's understand it through some terms.

The Greedy method is the simplest and most straightforward approach. It is not an algorithm but a technique. The main feature of this approach is that decisions are taken on the basis of the currently available information: whatever information is present, the decision is made without worrying about the effect of the current decision in the future.

This technique is basically used to determine a feasible solution that may or may not be optimal. A feasible solution is a subset that satisfies the given criteria. The optimal solution is the best and most favorable solution in the subset. If more than one solution satisfies the given criteria, all of them are considered feasible, whereas the optimal solution is the best solution among them.

Characteristics of Greedy method


o To construct the solution in an optimal way, this algorithm maintains two sets: one set contains the chosen items, and the other set contains the rejected items.
o A greedy algorithm makes good local choices in the hope that the resulting solution will be feasible or optimal.

Components of Greedy Algorithm


o Candidate set: The set of elements from which a solution is created.
o Selection function: This function is used to choose the candidate or subset which can be added to the solution.
o Feasibility function: A function that is used to determine whether the candidate or subset can be used to contribute to the solution or not.
o Objective function: A function used to assign a value to the solution or a partial solution.
o Solution function: This function is used to indicate whether a complete solution has been reached or not.
Applications of Greedy Algorithm
o It is used in finding the shortest path.
o It is used to find the minimum spanning tree using the prim's algorithm or the
Kruskal's algorithm.
o It is used in a job sequencing with a deadline.
o This algorithm is also used to solve the fractional knapsack problem.
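As the last application notes, the fractional knapsack problem is solved greedily by taking items in decreasing value-per-weight order. A minimal sketch (the `(value, weight)` pair layout and the sample numbers in the usage below are illustrative assumptions):

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs; returns the maximum total value."""
    # Greedy criterion: consider items in decreasing value-per-weight ratio.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)      # take the whole item, or a fraction
        total += value * (take / weight)
        capacity -= take
    return total
```

For example, with items (60, 10), (100, 20), (120, 30) and capacity 50, the greedy choice takes the first two items whole and two-thirds of the last.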

6. Backtracking
is an algorithmic technique for solving problems. It uses recursive calls to build a solution step by step, and it removes the candidates that, given the problem's constraints, cannot give rise to a solution.
The backtracking algorithm is applied to some specific types of problems:
 Decision problems, to find a feasible solution of the problem.
 Optimization problems, to find the best solution that can be applied.
 Enumeration problems, to find the set of all feasible solutions of the problem.
In a backtracking problem, the algorithm tries to find a path to the solution with checkpoints from which it can backtrack when no feasible continuation is found.
Example:
In the figure for this example, green is the start point, blue the intermediate points, red the points with no feasible solution, and dark green the end solution.
When the algorithm propagates to an end, it checks whether it is a solution; if it is, it returns the solution, otherwise it backtracks to the point one step behind to try the next alternative.
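A classic instance of this choose/explore/backtrack pattern is the N-Queens problem. A minimal sketch (the one-column-per-row board representation is an illustrative choice):

```python
def solve_n_queens(n):
    """Collect all placements of n non-attacking queens via backtracking."""
    solutions = []

    def place(row, cols):
        if row == n:                      # all rows filled: a full solution
            solutions.append(cols[:])
            return
        for col in range(n):
            # Constraint check: same column or same diagonal => prune branch
            if any(c == col or abs(c - col) == row - r
                   for r, c in enumerate(cols)):
                continue
            cols.append(col)              # choose
            place(row + 1, cols)          # explore
            cols.pop()                    # backtrack

    place(0, [])
    return solutions
```

Each recursive call is a checkpoint: when no column in the current row is feasible, `place` returns and the previous choice is undone.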
7. Algorithm performance analysis

Performance analysis of an algorithm depends upon two factors, i.e. the amount of memory used and the amount of compute time consumed on any CPU. Formally, these are expressed as complexities in terms of:
 Space Complexity.
 Time Complexity.

Space Complexity of an algorithm is the amount of memory it needs to run to completion, i.e. from the start of execution to its termination. The space needed by any algorithm is the sum of the following components:
1. Fixed Component: This is independent of the characteristics of the inputs and outputs. This part includes the instruction space, space for simple variables, fixed-size component variables, and constants.
2. Variable Component: This consists of the space needed by component variables whose size depends on the particular problem instance (inputs/outputs) being solved, the space needed by referenced variables, and the recursion stack space, which is one of the most prominent components. This also includes data structure components like linked lists, heaps, trees, graphs, etc.

Therefore the total space requirement of any algorithm 'A' can be provided as

Space(A) = Fixed Components(A) + Variable Components(A)

Of the fixed and variable components, the variable part must be determined accurately, so that the actual space requirement of an algorithm 'A' can be identified. To identify the space complexity of any algorithm, the following steps can be followed:
1. Determine the variables which are instantiated with some default values.
2. Determine which instance characteristics should be used to measure the space requirement; this will be problem specific.
3. Generally the choices are limited to quantities related to the number and magnitudes of the inputs to and outputs from the algorithm.
4. Sometimes more complex measures of the interrelationships among the data items can be used.
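To make the fixed/variable distinction concrete, here is a small illustrative sketch (the two functions are invented for illustration, not part of any standard library):

```python
def sum_iter(arr):
    # Fixed-style space: a few scalar variables regardless of input size,
    # so the auxiliary space is O(1).
    total = 0
    for x in arr:
        total += x
    return total

def sum_rec(arr):
    # Variable-style space: the recursion stack grows with the input length,
    # so the auxiliary space is at least O(n).
    if not arr:
        return 0
    return arr[0] + sum_rec(arr[1:])
```

Both compute the same result, but their variable components differ: the recursive version's stack depth depends on the problem instance.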
8. Knapsack Problem

Given a set of items, each with a weight and a value, determine a subset of items to
include in a collection so that the total weight is less than or equal to a given limit and the
total value is as large as possible.
The knapsack problem is a problem in combinatorial optimization. It appears as a subproblem in many more complex mathematical models of real-world problems. One general approach to difficult problems is to identify the most restrictive constraint, ignore the others, solve a knapsack problem, and somehow adjust the solution to satisfy the ignored constraints.
Applications
In many cases of resource allocation with constraints, the problem can be derived in a way similar to the knapsack problem. Following is a set of examples:

 Finding the least wasteful way to cut raw materials
 Portfolio optimization
 Cutting stock problems
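For the 0/1 variant described above (each item taken whole or not at all), a compact dynamic-programming sketch can be written as follows; the sample values and weights in the usage are illustrative:

```python
def knapsack_01(values, weights, capacity):
    """Return the maximum total value achievable within the weight limit."""
    # dp[w] = best value achievable with total weight <= w
    dp = [0] * (capacity + 1)
    for v, wt in zip(values, weights):
        # Iterate weights downwards so each item is used at most once.
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[capacity]
```

With values (60, 100, 120), weights (10, 20, 30) and capacity 50, the best subset is the second and third items.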
9. Explain with algorithms the breadth-first search and depth-first search graph traversal methods.
In this section we will see what a graph data structure is, and its traversal algorithms.
A graph is a non-linear data structure consisting of nodes and the edges connecting them. The edges may be directed or undirected. A graph can be represented as G(V, E). The following graph can be represented as G({A, B, C, D, E}, {(A, B), (B, D), (D, E), (B, C), (C, A)})

The graph has two types of traversal algorithms. These are called the Breadth First
Search and Depth First Search.

Breadth First Search (BFS)

The Breadth First Search (BFS) traversal is an algorithm used to visit all of the nodes of a given graph. In this traversal algorithm, one node is selected and then all of its adjacent nodes are visited one by one. After completing all of the adjacent vertices, it moves on to the next visited vertex and checks its adjacent vertices again.

Algorithm

bfs(vertices, start)
Input: The list of vertices, and the start vertex.
Output: Traverses all of the nodes, if the graph is connected.
Begin
   define an empty queue que
   at first mark all nodes' status as unvisited
   mark start as visited and add it into que
   while que is not empty, do
      delete item from que and set to u
      display the vertex u
      for all vertices v adjacent to u, do
         if v is unvisited, then
            mark v as visited
            add v into que
      done
   done
End
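The BFS procedure above can be sketched in Python; the adjacency-dictionary layout and the vertex labels in the usage are illustrative assumptions:

```python
from collections import deque

def bfs(graph, start):
    """graph: adjacency dict {vertex: [neighbours]}; returns the visit order."""
    visited = {start}            # mark start as visited
    order = []
    queue = deque([start])
    while queue:
        u = queue.popleft()      # delete item from the queue and set to u
        order.append(u)          # "display" the vertex u
        for v in graph[u]:       # all vertices adjacent to u
            if v not in visited: # unvisited => mark and enqueue
                visited.add(v)
                queue.append(v)
    return order
```

For the graph {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A'], 'D': ['B']}, starting at A, the visit order is A, B, C, D.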

Depth First Search (DFS)

The Depth First Search (DFS) is a graph traversal algorithm. In this algorithm, one starting vertex is given, and when an adjacent vertex is found, the algorithm moves to that adjacent vertex first and tries to traverse onwards in the same manner.

Algorithm

dfs(vertices, start)
Input: The list of all vertices, and the start node.
Output: Traverses all nodes in the graph.
Begin
   initially make the state unvisited for all nodes
   push start into the stack
   while stack is not empty, do
      pop element from stack and set to u
      if u is not visited, then
         mark u as visited
         display the node u
         for all nodes v connected to u, do
            if v is unvisited, then
               push v into the stack
      done
   done
End
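The DFS procedure can likewise be sketched in Python with an explicit stack; the adjacency-dictionary layout is an illustrative assumption:

```python
def dfs(graph, start):
    """Iterative depth-first traversal; returns the visit order."""
    visited = set()
    order = []
    stack = [start]
    while stack:
        u = stack.pop()          # pop element from stack and set to u
        if u in visited:         # may already have been reached another way
            continue
        visited.add(u)
        order.append(u)
        # Push neighbours in reverse so the first-listed one is explored first.
        for v in reversed(graph[u]):
            if v not in visited:
                stack.append(v)
    return order
```

On the same example graph as the BFS sketch, DFS from A goes deep along A, B, D before returning to C.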
8. Minimum Spanning Tree

A Minimum Spanning Tree (MST) is a subset of the edges of a connected weighted undirected graph that connects all the vertices together with the minimum possible total edge weight. To derive an MST, Prim’s algorithm or Kruskal’s algorithm can be used. Hence, we will discuss Prim’s algorithm in this chapter.
As we have discussed, one graph may have more than one spanning tree. If there are n vertices, the spanning tree will have n − 1 edges. In this context, if each edge of the graph is associated with a weight and there exists more than one spanning tree, we need to find the minimum spanning tree of the graph.
Moreover, if there exist edges with equal weights, the graph may have multiple minimum spanning trees.

In the above graph, we have shown a spanning tree though it’s not the minimum spanning
tree. The cost of this spanning tree is (5 + 7 + 3 + 3 + 5 + 8 + 3 + 4) = 38.
We will use Prim’s algorithm to find the minimum spanning tree.

Prim’s algorithm is a greedy approach to find the minimum spanning tree. In this
algorithm, to form a MST we can start from an arbitrary vertex.

Algorithm: MST-Prim (G, w, r)

for each u ∈ G.V
   u.key = ∞
   u.∏ = NIL
r.key = 0
Q = G.V
while Q ≠ Ф
   u = Extract-Min (Q)
   for each v ∈ G.adj[u]
      if v ∈ Q and w(u, v) < v.key
         v.∏ = u
         v.key = w(u, v)
The function Extract-Min returns the vertex with minimum edge cost. This function works
on min-heap.
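The procedure can be sketched in Python with a binary min-heap standing in for Extract-Min; the adjacency-list layout {u: [(weight, v), ...]} and the sample graph in the usage are illustrative assumptions:

```python
import heapq

def prim_mst(graph, start):
    """graph: {u: [(weight, v), ...]}; returns the total MST weight."""
    visited = set()
    heap = [(0, start)]          # r.key = 0
    total = 0
    while heap:
        key, u = heapq.heappop(heap)   # Extract-Min on vertex keys
        if u in visited:
            continue                   # stale entry: u already in the tree
        visited.add(u)
        total += key
        for w, v in graph[u]:
            if v not in visited:       # v still outside the tree: relax key
                heapq.heappush(heap, (w, v))
    return total
```

For a triangle with edge weights a-b = 1, b-c = 2, a-c = 4, the MST weight is 3.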
Example
Using Prim’s algorithm, we can start from any vertex, let us start from vertex 1.
Vertex 3 is connected to vertex 1 with minimum edge cost, hence edge (1, 3) is added to the spanning tree.
Next, edge (2, 3) is considered as this is the minimum among edges {(1, 2), (2, 3), (3, 4),
(3, 7)}.
In the next step, we get edge (3, 4) and (2, 4) with minimum cost. Edge (3, 4) is selected
at random.
In a similar way, edges (4, 5), (5, 7), (7, 8), (6, 8) and (6, 9) are selected. As all the vertices
are visited, now the algorithm stops.
The cost of the spanning tree is (2 + 2 + 3 + 2 + 5 + 2 + 3 + 4) = 23. No other spanning tree of this graph has cost less than 23.
9. Bi connected components and DFS.
A biconnected component is a maximal biconnected subgraph.
Biconnected graphs have been discussed previously. In this article, we will see how to find the biconnected components of a graph using the algorithm by John Hopcroft and Robert Tarjan.

In the above graph, the following are the biconnected components:

4–2 3–4 3–1 2–3 1–2


8–9
8–5 7–8 5–7
6–0 5–6 1–5 0–1
10–11
The algorithm is based on the Disc and Low values discussed in the Strongly Connected Components article.
The idea is to store visited edges in a stack while running DFS on the graph and to keep looking for Articulation Points (highlighted in the figure above). As soon as an Articulation Point u is found, all edges visited during DFS from node u onwards will form one biconnected component. When DFS completes for one connected component, all edges present in the stack will form a biconnected component.
If there is no Articulation Point in the graph, then the graph is biconnected, and so there will be one biconnected component: the graph itself.
10. Hamilton Circuit Problem:-
Definition: A Hamiltonian cycle is a cycle that contains all vertices in a graph. If a graph has a Hamiltonian cycle, then the graph is said to be Hamiltonian. A Hamiltonian cycle, also called a Hamiltonian circuit, Hamilton cycle, or Hamilton circuit, is a graph cycle (i.e., closed loop) through a graph that visits each node exactly once.
A graph possessing a Hamiltonian cycle is said to be a Hamiltonian graph. The Hamiltonian cycle problem is a special case of the travelling salesman problem, obtained by setting the distance between two cities to one if they are adjacent and two otherwise, and verifying that the total distance travelled is equal to n (if so, the route is a Hamiltonian circuit; if there is no Hamiltonian circuit, then the shortest route will be longer).
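A straightforward backtracking test for a Hamiltonian cycle can be sketched as follows; the adjacency-dictionary representation and the sample graphs in the test are illustrative assumptions:

```python
def has_hamiltonian_cycle(graph):
    """graph: adjacency dict; search for a cycle that visits every
    vertex exactly once and returns to the starting vertex."""
    vertices = list(graph)
    n = len(vertices)
    start = vertices[0]

    def extend(path, visited):
        if len(path) == n:                     # all vertices used:
            return start in graph[path[-1]]    # does the cycle close?
        for v in graph[path[-1]]:
            if v not in visited:
                visited.add(v)
                if extend(path + [v], visited):
                    return True
                visited.remove(v)              # backtrack
        return False

    return extend([start], {start})
```

A triangle has a Hamiltonian cycle; a simple path of three vertices does not.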

11. Write notes on algorithm analysis.


Algorithm analysis is an important part of computational complexity theory, which provides theoretical estimates for the resources an algorithm requires to solve a specific computational problem. Most algorithms are designed to work with inputs of arbitrary length. Analysis of algorithms is the determination of the amount of time and space resources required to execute an algorithm.
Usually, the efficiency or running time of an algorithm is stated as a function relating the
input length to the number of steps, known as time complexity, or volume of memory,
known as space complexity.

12. What Is Time Complexity?


Time complexity is a type of computational complexity that describes the time required to execute an algorithm. The time complexity of an algorithm is the total time it takes to run, expressed as a function of the input size. As a result, it is highly dependent on the size of the processed data. It also aids in defining an algorithm's effectiveness and evaluating its performance.

13. What Is Space Complexity?

When an algorithm is run on a computer, it necessitates a certain amount of memory space. The amount of memory used by a program during execution is represented by its space complexity. Because a program requires memory both to store input data and to hold temporary values while running, space complexity is the sum of the auxiliary space and the input space.
14. Worst Case, Average Case, and Best Case
Worst Case Analysis:
In the worst-case analysis, we calculate the upper limit of the execution time of an
algorithm. It is necessary to know the case which causes the execution of the maximum
number of operations.

For linear search, the worst case occurs when the element to search for is not present in the array. When x is not present, the search() function compares it with all the elements of arr[] one by one. Therefore, the worst-case time complexity of linear search would be Θ(n).

Average Case Analysis:


In the average case analysis, we take all possible inputs, calculate the computation time for each input, add up all the calculated values, and divide the sum by the total number of inputs.

This requires predicting the distribution of cases. For the linear search problem, assume that all cases (including the case where x is absent) are uniformly distributed, so we add the costs of all n + 1 cases and divide the sum by (n + 1).

Best Case Analysis:


In the best case analysis, we calculate the lower bound of the execution time of an algorithm. It is necessary to know the case which causes the execution of the minimum number of operations. In the linear search problem, the best case occurs when x is present at the first location.

The number of operations in the best case is constant, so the best-case time complexity would be Θ(1). Most of the time, we perform worst-case analysis to analyze algorithms. In worst-case analysis, we guarantee an upper bound on the execution time of an algorithm, which is useful information.

The average case analysis is not easy to do in most practical cases and is rarely done; it requires predicting the mathematical distribution of all possible inputs. Best-case analysis is of little use: guaranteeing a lower bound on an algorithm provides no real information, because in the worst case the same algorithm can take years to run.
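The linear-search discussion above can be made concrete with a short sketch (the array contents in the usage are just illustrative):

```python
def linear_search(arr, x):
    """Return the index of x in arr, or -1 if absent."""
    for i, item in enumerate(arr):
        if item == x:
            return i        # best case: x at index 0 => one comparison, Θ(1)
    return -1               # worst case: x absent => all n compared, Θ(n)
```

The number of comparisons executed is exactly the position of x (plus one), which is why the three cases differ only in which input we assume.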
13. State the greedy method.

A greedy algorithm solves problems by making the choice that seems best at that particular moment. Many optimization problems can be solved using a greedy algorithm. For some problems with no known efficient exact solution, a greedy algorithm may provide a solution that is close to optimal. A greedy algorithm works if a problem has the following two properties:

Greedy choice property: By making locally optimal choices we can reach a globally optimal solution. In other words, by making “greedy” choices we can obtain an optimal solution.

Optimal substructure: Optimal solutions will always contain optimal subsolutions. In other words, the answers to subproblems of an optimal solution are themselves optimal.

Examples:

Following are a few examples of Greedy algorithms

Machine scheduling

Fractional Knapsack Problem

Minimum Spanning Tree

Huffman Code

Job Sequencing

Activity Selection Problem

Components of Greedy Algorithm

Greedy algorithms consist of the following five components −

A candidate set − A solution is created with the help of a candidate set.

A selection function − It is used to choose the best candidate that is to be added to the
solution.
A feasibility function − A feasibility function is useful in determining whether a candidate
can be used to contribute to the solution or not.
An objective function − This is used to assign a value to a solution or a partial solution.
A solution function − A solution function is used to indicate whether a complete solution
has been reached or not.
14. Differentiate between the subset paradigm and the ordering paradigm

Subset Paradigm: To solve a problem (or possibly find the optimal/best solution), the greedy approach generates a subset by selecting one or more of the available choices. Examples include the knapsack problem and job sequencing with deadlines. In both problems the greedy method creates a subset of items or jobs that satisfies all the constraints.

Ordering Paradigm: In this paradigm, the greedy approach generates some arrangement/order to get the best solution. Examples include Minimum Spanning Tree (Prim’s and Kruskal’s methods).

15. Merge Sort

Merge sort is yet another sorting algorithm that falls under the category of the Divide and Conquer technique. It is one of the best sorting techniques and is naturally expressed as a recursive algorithm.

16. Quick sort

It is an algorithm of Divide & Conquer type.

Divide: Rearrange the elements and split the array into two sub-arrays around a pivot element, such that each element in the left sub-array is less than or equal to the pivot and each element in the right sub-array is greater than the pivot.

Conquer: Recursively sort the two sub-arrays.

Combine: Combine the already sorted sub-arrays.
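A minimal sketch of these three steps (using the middle element as pivot and extra lists for clarity, rather than an in-place partition):

```python
def quick_sort(arr):
    # Divide: partition around a pivot; Conquer: sort each side;
    # Combine: concatenate the sorted pieces.
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    mid = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quick_sort(left) + mid + quick_sort(right)
```

A production quicksort partitions in place to avoid the extra lists; this version trades that for readability.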

17. Binary Search

1. In the Binary Search technique, we search for an element in a sorted array by recursively dividing the interval in half.

2. Firstly, we take the whole array as the interval.

3. If the key (the item to be searched) is less than the item in the middle of the interval, we discard the second half of the interval and recursively repeat the process on the first half, calculating a new middle element.

4. If the key (the item to be searched) is greater than the item in the middle of the interval, we discard the first half of the interval and work recursively on the second half, calculating a new middle element.

5. Repeat until the value is found or the interval is empty.
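The steps above can be sketched iteratively (the sample array in the usage is illustrative):

```python
def binary_search(arr, key):
    """arr must be sorted ascending; returns an index of key, or -1."""
    low, high = 0, len(arr) - 1
    while low <= high:               # interval not yet empty
        mid = (low + high) // 2
        if arr[mid] == key:
            return mid
        elif key < arr[mid]:
            high = mid - 1           # discard the second half
        else:
            low = mid + 1            # discard the first half
    return -1
```

Each iteration halves the interval, giving the familiar O(log n) running time.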


18. Strassen’s Matrix Multiplication Algorithm

In this context, using Strassen’s matrix multiplication algorithm, the time consumption can be improved a little compared with the naive method.
Strassen’s matrix multiplication can be performed only on square matrices where n is a power of 2. The order of both of the matrices is n × n.
Divide X, Y and Z into four (n/2)×(n/2) matrices as represented below −
Divide X, Y and Z into four (n/2) × (n/2) matrices as represented below, where Z = X × Y −
Z = [I J; K L], X = [A B; C D] and Y = [E F; G H]
Using Strassen’s Algorithm, compute the following −
M1 := (A + C) × (E + F)
M2 := (B + D) × (G + H)
M3 := (A − D) × (E + H)
M4 := A × (F − H)
M5 := (C + D) × E
M6 := (A + B) × H
M7 := D × (G − E)
Then,
I := M2 + M3 − M6 − M7
J := M4 + M6
K := M5 + M7
L := M1 − M3 − M4 − M5
Analysis

T(n) = c if n = 1
T(n) = 7 × T(n/2) + d × n² otherwise, where c and d are constants.
Using this recurrence relation, we get T(n) = O(n^(log₂ 7)).
Hence, the complexity of Strassen’s matrix multiplication algorithm is O(n^(log₂ 7)) ≈ O(n^2.81).
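As a concrete (unoptimized) sketch, the seven products and four combinations above can be implemented recursively on plain nested lists; the quadrant names mirror the labels used above:

```python
def strassen(X, Y):
    """Multiply two n x n matrices (n a power of 2) given as nested lists."""
    n = len(X)
    if n == 1:
        return [[X[0][0] * Y[0][0]]]
    h = n // 2
    # Split into quadrants: X = [[A, B], [C, D]], Y = [[E, F], [G, H]]
    A = [r[:h] for r in X[:h]]; B = [r[h:] for r in X[:h]]
    C = [r[:h] for r in X[h:]]; D = [r[h:] for r in X[h:]]
    E = [r[:h] for r in Y[:h]]; F = [r[h:] for r in Y[:h]]
    G = [r[:h] for r in Y[h:]]; H = [r[h:] for r in Y[h:]]

    add = lambda P, Q: [[p + q for p, q in zip(pr, qr)] for pr, qr in zip(P, Q)]
    sub = lambda P, Q: [[p - q for p, q in zip(pr, qr)] for pr, qr in zip(P, Q)]

    M1 = strassen(add(A, C), add(E, F))
    M2 = strassen(add(B, D), add(G, H))
    M3 = strassen(sub(A, D), add(E, H))
    M4 = strassen(A, sub(F, H))
    M5 = strassen(add(C, D), E)
    M6 = strassen(add(A, B), H)
    M7 = strassen(D, sub(G, E))

    I = sub(sub(add(M2, M3), M6), M7)
    J = add(M4, M6)
    K = add(M5, M7)
    L = sub(sub(sub(M1, M3), M4), M5)
    # Reassemble Z = [[I, J], [K, L]]
    return [I[i] + J[i] for i in range(h)] + [K[i] + L[i] for i in range(h)]
```

Only seven recursive multiplications are made per level (instead of eight), which is the source of the O(n^(log₂ 7)) bound.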
19. Algorithm
An algorithm is a set of steps of operations to solve a problem by performing calculation, data processing, and automated reasoning tasks. An algorithm is an effective method that can be expressed within a finite amount of time and space. An algorithm is the best way to represent the solution of a particular problem in a very simple and efficient way. If we have an algorithm for a specific problem, then we can implement it in any programming language, meaning that the algorithm is independent of any programming language.
20. Asymptotic notations
Asymptotic notations are the mathematical notations used to describe the running time
of an algorithm when the input tends towards a particular value or a limiting value.
For example: In bubble sort, when the input array is already sorted, the time taken by
the algorithm is linear i.e. the best case.
21. State Space Tree
A state space tree is a tree representing all the possible states (solution or non-solution) of the problem, from the root as an initial state to the leaves as terminal states.
22. What is meant by knapsack problem in DAA?
The knapsack problem is a problem in combinatorial optimization: Given a set of items,
each with a weight and a value, determine the number of each item to include in a
collection so that the total weight is less than or equal to a given limit and the total value
is as large as possible.
23. What is biconnected graph in DAA?
A biconnected undirected graph is a connected graph that is not broken into
disconnected pieces by deleting any single vertex (and its incident edges). A
biconnected directed graph is one such that for any two vertices v and w there are two
directed paths from v to w which have no vertices in common other than v and w.
24. Kruskal's Algorithm

Kruskal's Algorithm is used to find the minimum spanning tree of a connected weighted graph. The main target of the algorithm is to find the subset of edges by using which we can reach every vertex of the graph. It follows the greedy approach, making an optimum choice at every stage instead of focusing on a global optimum.
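A minimal sketch of Kruskal's algorithm, using a simple union-find structure to detect cycles; the integer vertex labels and edge-tuple layout are illustrative assumptions:

```python
def kruskal_mst(n, edges):
    """n vertices labelled 0..n-1; edges: list of (weight, u, v) tuples.
    Returns the total weight of the minimum spanning tree."""
    parent = list(range(n))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = 0
    for w, u, v in sorted(edges):     # greedy: cheapest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                  # different components => no cycle
            parent[ru] = rv           # union the two components
            total += w
    return total
```

For the triangle with weights 1, 2, 4, the two cheapest edges are taken and the heaviest is rejected because it would close a cycle.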

25. Graph coloring

It is a process or procedure of assigning colors to each corner or vertex in a particular graph in such a way that no two adjacent vertices or corners get the same color.
Its main objective is to minimize the number of colors used while coloring a given graph.
Vertex coloring is efficient and therefore used most often. There are other graph coloring methods, like edge coloring and face coloring, which can be transformed into a vertex coloring problem.
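A common heuristic for vertex coloring is the greedy scheme: give each vertex the smallest color not used by an already-colored neighbour. This does not always minimize the number of colors, but it is a simple sketch (the adjacency-dictionary layout is an illustrative assumption):

```python
def greedy_coloring(graph):
    """graph: adjacency dict; returns {vertex: colour index}."""
    colour = {}
    for u in graph:
        used = {colour[v] for v in graph[u] if v in colour}
        c = 0
        while c in used:          # smallest colour not used by neighbours
            c += 1
        colour[u] = c
    return colour
```

A triangle needs three colors; a path of three vertices needs only two.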

26.Traveling salesman problem (TSP)

The traveling salesman problem (TSP) is an algorithmic problem tasked with finding the shortest route through a set of points and locations that must be visited. In the problem statement, the points are the cities a salesperson might visit. The salesman's goal is to keep both the travel costs and the distance traveled as low as possible.

Focused on optimization, TSP is often used in computer science to find the most efficient route for data to travel between various nodes. Applications include identifying network or hardware optimization methods, and the creation of games that are solvable by finding a Hamiltonian cycle, a non-overlapping path through all nodes.

TSP has been studied for decades and several solutions have been theorized. The simplest solution is to try all possibilities, but this is also the most time-consuming and expensive method. Many solutions use heuristics, which provide fast but approximate results that are not always optimal. Other approaches include branch and bound, Monte Carlo, and Las Vegas algorithms.

Rather than focusing on finding the most effective route, TSP is often concerned with finding the cheapest solution. In TSPs, the large number of variables creates a challenge when finding the shortest route, which makes approximate, fast and cheap solutions all the more attractive.
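One simple heuristic of the kind mentioned above is nearest-neighbour: always travel to the closest unvisited city. It is fast but not guaranteed optimal; a minimal sketch (the distance-matrix representation and the sample numbers are illustrative assumptions):

```python
def tsp_nearest_neighbour(dist, start=0):
    """dist: symmetric matrix of pairwise distances between cities.
    Returns (tour, total cost), ending with the return to the start city."""
    n = len(dist)
    tour = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        # Greedy step: pick the closest city not yet visited.
        nxt = min(unvisited, key=lambda c: dist[last][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n - 1))
    cost += dist[tour[-1]][start]      # close the cycle back to the start
    return tour, cost
```

On adversarial inputs this heuristic can be far from optimal, which is exactly the approximate/cheap trade-off described above.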
