
Index

S. No.  Objective
1.   Write a Program for Recursive Binary & Linear Search.
2.   Write a Program for Heap Sort.
3.   Write a Program for Merge Sort.
4.   Write a Program for Selection Sort.
5.   Write a Program for Insertion Sort.
6.   Write a Program for Quick Sort.
7.   Knapsack Problem using Greedy Solution.
8.   Perform Travelling Salesman Problem.
9.   Write a Program for finding the Minimum Spanning Tree using Kruskal's Algorithm.
10.  Implement N Queen Problem using Backtracking.

Adarsh Pal 2204310130001


PRACTICAL#1

OBJECTIVE: Program for Recursive Binary & Linear Search.

Concept and theory:


Binary search is a more specialized algorithm than sequential search, as it takes advantage of
data that has already been sorted. The underlying idea of binary search is to divide the sorted
data into two halves and to examine the element at the point of the split. Since the data is
sorted, we can safely ignore one half or the other, depending on whether the value we are
looking for is smaller or larger than the element at the split. This makes for a much more
efficient search than linear search.
Binary search is one of the most common search algorithms and is useful in almost any
real-world application you might write.

Algorithm-

BinarySearch(A[0..N-1], value) {
    low = 0
    high = N - 1
    while (low <= high) {
        mid = (low + high) / 2
        if (A[mid] > value)
            high = mid - 1
        else if (A[mid] < value)
            low = mid + 1
        else
            return mid
    }
    return not_found
}

Complexity:
A major advantage of this algorithm is that in the worst case its running time depends only
logarithmically on the array size: the algorithm performs at most log2(n) iterations, which is a
very small number even for large arrays. This is easy to see: on every step the size of the part
still being searched is reduced by half, and the algorithm stops when there are no elements left
to search. The time complexity of binary search is therefore O(log2 n); linear search, by
contrast, examines the elements one by one and takes O(n) time.

Lab Assignment:

Write a program to search an element in sorted array using binary search algorithm and linear
search algorithm.
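A minimal C sketch for this assignment (the function names recursiveBinarySearch and linearSearch are illustrative; the array is assumed to be sorted before binary search is applied):

#include <stdio.h>

/* Recursive binary search: returns the index of 'value' in the sorted
   range A[low..high], or -1 if it is not present. */
int recursiveBinarySearch(int A[], int low, int high, int value)
{
    if (low > high)
        return -1;                        /* empty range: not found */
    int mid = low + (high - low) / 2;     /* avoids overflow of low + high */
    if (A[mid] == value)
        return mid;
    else if (A[mid] > value)
        return recursiveBinarySearch(A, low, mid - 1, value);
    else
        return recursiveBinarySearch(A, mid + 1, high, value);
}

/* Linear search: scans every element, so the array need not be sorted. */
int linearSearch(int A[], int n, int value)
{
    for (int i = 0; i < n; i++)
        if (A[i] == value)
            return i;
    return -1;
}

int main(void)
{
    int A[] = {1, 2, 3, 4, 5, 6, 7, 9};   /* sorted form of the sample input */
    int n = sizeof(A) / sizeof(A[0]);

    printf("binary search for 5 -> index %d\n", recursiveBinarySearch(A, 0, n - 1, 5));
    printf("linear search for 7 -> index %d\n", linearSearch(A, n, 7));
    return 0;
}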

Result:
Input: 2, 3, 6, 1, 4, 5, 7, 9

Output: 1, 2, 3, 4, 5, 6, 7, 9



PRACTICAL#2

OBJECTIVE: Program for Heap Sort.

Concept and Theory:

The binary heap data structure is an array that can be viewed as a complete binary tree. Each
node of the tree corresponds to an element of the array. The array is completely filled on all
levels except possibly the lowest.

The root of the tree is A[1], and given the index i of a node, the indices of its parent, left
child and right child can be computed as:
PARENT(i): return floor(i/2)
LEFT(i): return 2i
RIGHT(i): return 2i + 1

Heap Property:

In a (max-)heap, for every node i other than the root, the value of a node is at most the value
of its parent:
A[PARENT(i)] ≥ A[i]

Thus, the largest element in a heap is stored at the root.

Algorithm-

void siftDown(int numbers[], int root, int bottom);   /* defined below */

void heapSort(int numbers[], int array_size)
{
    int i, temp;

    /* Build a max-heap: sift down every internal node, starting from the last one. */
    for (i = (array_size / 2) - 1; i >= 0; i--)
        siftDown(numbers, i, array_size - 1);

    /* Repeatedly move the current maximum to the end and restore the heap. */
    for (i = array_size - 1; i >= 1; i--)
    {
        temp = numbers[0];
        numbers[0] = numbers[i];
        numbers[i] = temp;
        siftDown(numbers, 0, i - 1);
    }
}

Adarsh Pal 2204310130001


/* Restore the max-heap property for the subtree rooted at 'root';
   'bottom' is the index of the last element that belongs to the heap. */
void siftDown(int numbers[], int root, int bottom)
{
    int done = 0, maxChild, temp;

    /* In a 0-based array the children of node i are 2i+1 and 2i+2. */
    while ((2 * root + 1 <= bottom) && (!done))
    {
        if (2 * root + 1 == bottom)                    /* only a left child */
            maxChild = 2 * root + 1;
        else if (numbers[2 * root + 1] > numbers[2 * root + 2])
            maxChild = 2 * root + 1;
        else
            maxChild = 2 * root + 2;

        if (numbers[root] < numbers[maxChild])
        {
            temp = numbers[root];
            numbers[root] = numbers[maxChild];
            numbers[maxChild] = temp;
            root = maxChild;
        }
        else
            done = 1;
    }
}

Complexity:

siftDown (heapify) runs in O(log N) time, building the max-heap takes O(N) overall, and
siftDown is called N−1 more times in the sorting loop; the overall complexity of heap sort is
therefore O(N log N).

Lab Assignment:

Illustrate the operation of heap sort on A = < 54,68,12,49,31,27,59,13,10 >
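A minimal driver for this assignment, assuming the heapSort and siftDown functions above are in the same file:

#include <stdio.h>

/* heapSort() and siftDown() as defined above */

int main(void)
{
    int A[] = {54, 68, 12, 49, 31, 27, 59, 13, 10};
    int n = sizeof(A) / sizeof(A[0]);

    heapSort(A, n);

    for (int i = 0; i < n; i++)
        printf("%d ", A[i]);       /* prints 10 12 13 27 31 49 54 59 68 */
    printf("\n");
    return 0;
}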

Result:

Input: 54, 68, 12, 49, 31, 27, 59, 13, 10

Output: 10, 12, 13, 27, 31, 49, 54, 59, 68



PRACTICAL#3

OBJECTIVE: Program for Merge Sort.

Concept and theory: The Merge Sort algorithm closely follows the divide-and-conquer
paradigm. The key operation of the algorithm is the merging of two sorted sequences in the
"combine" step. To perform the merging, we use an auxiliary procedure MERGE(A, p, q, r),
where A is an array and p, q, r are indices into the array such that p ≤ q < r. It merges the two
sorted subarrays A[p..q] and A[q+1..r] to form a single sorted subarray that replaces the
current subarray A[p..r]. Although the pseudocode for Merge Sort works correctly when the
number of elements is not even, our recurrence-based analysis is simplified if we assume that
the original problem size is a power of 2; each divide step then yields two subsequences of
size exactly n/2.

Algorithm-

void m_sort(int numbers[], int temp[], int left, int right);
void merge(int numbers[], int temp[], int left, int mid, int right);

/* Entry point: sorts numbers[0..array_size-1]; temp must be an auxiliary
   array with at least array_size elements. */
void mergeSort(int numbers[], int temp[], int array_size)
{
    m_sort(numbers, temp, 0, array_size - 1);
}

/* Recursively sort numbers[left..right]. */
void m_sort(int numbers[], int temp[], int left, int right)
{
    int mid;

    if (right > left)
    {
        mid = (right + left) / 2;
        m_sort(numbers, temp, left, mid);         /* sort the left half  */
        m_sort(numbers, temp, mid + 1, right);    /* sort the right half */
        merge(numbers, temp, left, mid + 1, right);
    }
}

/* Merge the two sorted runs numbers[left..mid-1] and numbers[mid..right]
   into temp, then copy the merged run back into numbers. */
void merge(int numbers[], int temp[], int left, int mid, int right)
{
    int i, left_end, num_elements, tmp_pos;

    left_end = mid - 1;
    tmp_pos = left;
    num_elements = right - left + 1;

    /* repeatedly take the smaller front element of the two runs */
    while ((left <= left_end) && (mid <= right))
    {
        if (numbers[left] <= numbers[mid])
        {
            temp[tmp_pos] = numbers[left];
            tmp_pos = tmp_pos + 1;
            left = left + 1;
        }
        else
        {
            temp[tmp_pos] = numbers[mid];
            tmp_pos = tmp_pos + 1;
            mid = mid + 1;
        }
    }

    /* copy whatever is left of either run */
    while (left <= left_end)
    {
        temp[tmp_pos] = numbers[left];
        left = left + 1;
        tmp_pos = tmp_pos + 1;
    }
    while (mid <= right)
    {
        temp[tmp_pos] = numbers[mid];
        mid = mid + 1;
        tmp_pos = tmp_pos + 1;
    }

    /* copy the merged run back (exactly num_elements values) */
    for (i = 0; i < num_elements; i++)
    {
        numbers[right] = temp[right];
        right = right - 1;
    }
}

Complexity:
All cases have the same efficiency: Θ(n log n).
The number of comparisons is close to the theoretical minimum for comparison-based sorting:
⌈log2 n!⌉ ≈ n log2 n − 1.44n.

Lab Assignment:
Illustrate the operation of merge sort on the array A = < 3, 51, 17, 12, 9, 42, 3, 47 >.
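A minimal driver for this assignment, assuming the mergeSort, m_sort and merge functions above; note that the caller supplies the auxiliary temp array:

#include <stdio.h>

/* mergeSort(), m_sort() and merge() as defined above */

int main(void)
{
    int A[]  = {3, 51, 17, 12, 9, 42, 3, 47};
    int n    = sizeof(A) / sizeof(A[0]);
    int temp[8];                  /* auxiliary array, same size as A */

    mergeSort(A, temp, n);

    for (int i = 0; i < n; i++)
        printf("%d ", A[i]);      /* prints 3 3 9 12 17 42 47 51 */
    printf("\n");
    return 0;
}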

Result:

Input: 3, 51, 17, 12, 9, 42, 3, 47

Output: 3, 3, 9, 12, 17, 42, 47, 51



PRACTICAL #4
OBJECTIVE: Program for Selection Sort.
To implement a program for selection sort

Selection Sort:
One of the simplest sorting techniques is selection sort. As the name suggests, selection sort
repeatedly selects an element and places it into its sorted position. The strategy is to find the
smallest number in the array and exchange it with the value in the first position of the array.
Then find the second smallest element in the remainder of the array and exchange it with the
value in the second position, and carry on until you reach the end of the array. All the
elements are then sorted in ascending order.

Algorithm:

Let ARR be an array having N elements.

1. Read ARR
2. Repeat steps 3 to 6 for I = 0 to N-2
3. Set MIN = ARR[I] and set LOC = I
4. Repeat step 5 for J = I+1 to N-1
5. If MIN > ARR[J], then
   (a) Set MIN = ARR[J]
   (b) Set LOC = J
   [End of if]
   [End of step 4 loop]
6. Interchange ARR[I] and ARR[LOC] using a temporary variable
   [End of step 2 outer loop]
7. Exit
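A C sketch of the steps above (a minimal implementation; the function name selectionSort is illustrative):

/* Selection sort: repeatedly place the smallest remaining element
   at the front of the unsorted part of the array. */
void selectionSort(int arr[], int n)
{
    int i, j, min, loc, temp;

    for (i = 0; i <= n - 2; i++)            /* step 2 */
    {
        min = arr[i];                       /* step 3 */
        loc = i;
        for (j = i + 1; j <= n - 1; j++)    /* step 4 */
        {
            if (min > arr[j])               /* step 5 */
            {
                min = arr[j];
                loc = j;
            }
        }
        temp = arr[i];                      /* step 6: swap */
        arr[i] = arr[loc];
        arr[loc] = temp;
    }
}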

COMPLEXITY:
The time complexity of selection sort is O(n²) for both the worst case and the average case,
because the number of comparisons is the same in both cases.

Result:

Input: 3, 51, 17, 12, 9, 42, 3, 47

Output: 3, 3, 9, 12, 17, 42, 47, 51



PRACTICAL#5
OBJECTIVE: Program for Insertion Sort.

Concept and theory:


Insertion sort is an efficient algorithm for sorting a small number of elements. Insertion sort
works the way many people sort a hand of playing cards: we start with an empty left hand and
the cards face down on the table. To find the correct position for a card, we compare it with
each of the cards already in the hand, from right to left.
Our pseudocode for insertion sort is presented as a procedure called InsertionSort, which
takes as a parameter an array A[1..n] containing a sequence of length n that is to be sorted.
The input numbers are sorted in place: the numbers are rearranged within the array A, with at
most a constant number of them stored outside the array at any time. The input array A
contains the sorted output sequence when InsertionSort is finished.

Algorithm-
void insertionSort(int numbers[], int array_size)
{
    int i, j, index;

    for (i = 1; i < array_size; i++)
    {
        index = numbers[i];          /* element to insert */
        j = i;
        /* shift larger elements one position to the right */
        while ((j > 0) && (numbers[j - 1] > index))
        {
            numbers[j] = numbers[j - 1];
            j = j - 1;
        }
        numbers[j] = index;          /* drop it into its place */
    }
}

Lab Assignment:
< 5,2,4,6,1,3 > Sort this array using an Insertion Sort.
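A minimal driver for this assignment, assuming the insertionSort function above is in the same file:

#include <stdio.h>

/* insertionSort() as defined above */

int main(void)
{
    int A[] = {5, 2, 4, 6, 1, 3};
    int n = sizeof(A) / sizeof(A[0]);

    insertionSort(A, n);

    for (int i = 0; i < n; i++)
        printf("%d ", A[i]);      /* prints 1 2 3 4 5 6 */
    printf("\n");
    return 0;
}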

Result:

Input: 5, 2, 4, 6, 1, 3

Output: 1, 2, 3, 4, 5, 6



PRACTICAL #6

OBJECTIVE: Program for Quick Sort.


Program to implement a quick sort

Quick Sort divides the array according to the values of its elements. It rearranges the elements
of a given array A[0..n-1] to achieve its partition: the elements before position s are smaller
than or equal to A[s], and all the elements after position s are greater than or equal to A[s].

Algorithm: QUICKSORT(A[l..r])
// Sorts a subarray by quicksort
// Input: A subarray A[l..r] of A[0..n-1], defined by its left and right indices l and r
// Output: Subarray A[l..r] sorted in nondecreasing order
{
    if l < r
    {
        s ← Partition(A[l..r])    // s is a split position
        QUICKSORT(A[l..s-1])
        QUICKSORT(A[s+1..r])
    }
}

Algorithm: Partition(A[l..r])
// Partitions a subarray by using its first element as the pivot
// Input: A subarray A[l..r] of A[0..n-1], defined by its left and right indices l and r (l < r)
// Output: The partition of A[l..r], with the split position returned as this function's value
{
    p ← A[l]
    i ← l; j ← r + 1
    repeat
    {
        repeat i ← i + 1 until A[i] >= p
        repeat j ← j - 1 until A[j] <= p
        swap(A[i], A[j])
    } until i >= j
    swap(A[i], A[j])    // undo the last swap, made when i >= j
    swap(A[l], A[j])    // place the pivot at its final position
    return j
}
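A runnable C sketch of this scheme (the function names quickSort and partition are illustrative; the partition routine uses the first element as the pivot, as in the pseudocode, with an added bound check on i):

#include <stdio.h>

static void swap(int *a, int *b)
{
    int t = *a; *a = *b; *b = t;
}

/* Partition A[l..r] around the pivot A[l]; returns the pivot's final index. */
static int partition(int A[], int l, int r)
{
    int p = A[l];
    int i = l, j = r + 1;

    while (1)
    {
        do { i++; } while (i <= r && A[i] < p);   /* scan right for an element >= pivot */
        do { j--; } while (A[j] > p);             /* scan left  for an element <= pivot */
        if (i >= j)
            break;
        swap(&A[i], &A[j]);
    }
    swap(&A[l], &A[j]);                           /* put the pivot into its final place */
    return j;
}

void quickSort(int A[], int l, int r)
{
    if (l < r)
    {
        int s = partition(A, l, r);
        quickSort(A, l, s - 1);
        quickSort(A, s + 1, r);
    }
}

int main(void)
{
    int A[] = {3, 51, 17, 12, 9, 42, 3, 47};
    int n = sizeof(A) / sizeof(A[0]);

    quickSort(A, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", A[i]);                      /* prints 3 3 9 12 17 42 47 51 */
    printf("\n");
    return 0;
}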

Result:

Input: 3, 51, 17, 12, 9, 42, 3, 47

Output: 3, 3, 9, 12, 17, 42, 47, 51



PRACTICAL#7

OBJECTIVE: Knapsack Problem using Greedy Solution.

Concept and Theory:


The Knapsack Problem is a problem in combinatorial optimization. It derives its name from
the problem of choosing the most valuable items that can fit into one bag (of maximum
weight) to be carried on a trip. Similar problems appear very often in business, combinatorics,
complexity theory, cryptography and applied mathematics. Given a set of items, each with a
weight (cost) and a value, determine the number of each item to include in a collection so that
the total weight is at most a given capacity and the total value is as large as possible.
Algorithm-

DP-01K(v, w, n, W)
    for w = 0 to W
        c[0, w] = 0
    for i = 1 to n
        c[i, 0] = 0
        for w = 1 to W
            if w[i] ≤ w
                then if v[i] + c[i-1, w - w[i]] > c[i-1, w]
                    then c[i, w] = v[i] + c[i-1, w - w[i]]
                    else c[i, w] = c[i-1, w]
            else c[i, w] = c[i-1, w]
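The pseudocode above is the dynamic-programming formulation of the 0/1 knapsack. Since the objective names the greedy solution, the following is a hedged C sketch of the greedy strategy, which is optimal for the fractional variant of the problem: sort the items by value/weight ratio and take them in that order. The Item struct, the function names and the sample data are illustrative assumptions.

#include <stdio.h>
#include <stdlib.h>

typedef struct { double value, weight; } Item;

/* Order items by decreasing value/weight ratio. */
static int byRatioDesc(const void *a, const void *b)
{
    double ra = ((const Item *)a)->value / ((const Item *)a)->weight;
    double rb = ((const Item *)b)->value / ((const Item *)b)->weight;
    return (ra < rb) - (ra > rb);
}

/* Greedy fractional knapsack: returns the maximum total value
   that fits into a knapsack of the given capacity. */
double fractionalKnapsack(Item items[], int n, double capacity)
{
    double total = 0.0;

    qsort(items, n, sizeof(Item), byRatioDesc);
    for (int i = 0; i < n && capacity > 0; i++)
    {
        if (items[i].weight <= capacity)          /* take the whole item */
        {
            total += items[i].value;
            capacity -= items[i].weight;
        }
        else                                      /* take a fraction of it */
        {
            total += items[i].value * (capacity / items[i].weight);
            capacity = 0;
        }
    }
    return total;
}

int main(void)
{
    Item items[] = {{60, 10}, {100, 20}, {120, 30}};                 /* sample data */
    printf("max value = %.2f\n", fractionalKnapsack(items, 3, 50));  /* prints 240.00 */
    return 0;
}

For the 0/1 variant, the same greedy ordering only gives an approximation; the DP recurrence above is what guarantees the optimal answer.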

Lab Assignment:
(a) Write a program to solve the 0/1 knapsack problem.



PRACTICAL#8

OBJECTIVE: Perform Travelling Salesman Problem.

Implement any scheme to find the optimal solution for the Traveling Salesperson problem
and then solve the same problem instance using any approximation algorithm and determine
the error in the approximation.

Traveling Salesperson problem:

Given n cities, a salesperson starts at a specified city (the source), visits the remaining n-1
cities exactly once and returns to the city from which he started. The objective of this problem
is to find a route through the cities that minimizes the total cost of the tour (and thereby
maximizes the profit).

ALGORITHM:
TSP(start city, current city, next city, path)
// Purpose: To find the solution for the TSP problem using exhaustive search
// Input: The start city, the current city, the next city and the path
// Output: The minimum distance covered, along with the path

Step 1: Check for a disconnection between the current city and the next city
Step 2: Check whether the travelling salesperson has visited all the cities
Step 3: Find the next city to be visited
Step 4: Find the solution and terminate
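A brute-force C sketch of this exhaustive search (the small hard-coded cost matrix and the names tsp and best are illustrative assumptions):

#include <stdio.h>

#define N 4
#define INF 1000000

/* symmetric cost matrix for a small sample instance */
int cost[N][N] = {
    {  0, 10, 15, 20 },
    { 10,  0, 35, 25 },
    { 15, 35,  0, 30 },
    { 20, 25, 30,  0 }
};

int visited[N];
int best = INF;

/* Exhaustive search: extend the partial tour city by city and keep the
   cheapest complete tour that returns to the start city. */
void tsp(int start, int current, int count, int distance)
{
    if (count == N) {                          /* all cities visited */
        if (distance + cost[current][start] < best)
            best = distance + cost[current][start];
        return;
    }
    for (int next = 0; next < N; next++) {
        if (!visited[next] && cost[current][next] > 0) {
            visited[next] = 1;
            tsp(start, next, count + 1, distance + cost[current][next]);
            visited[next] = 0;                 /* backtrack */
        }
    }
}

int main(void)
{
    visited[0] = 1;                            /* start the tour at city 0 */
    tsp(0, 0, 1, 0);
    printf("minimum tour cost = %d\n", best);  /* prints 80 for this matrix */
    return 0;
}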



PRACTICAL#9

OBJECTIVE: Find Minimum Spanning Tree using Kruskal’s Algorithm


Kruskal's algorithm for computing the minimum spanning tree is directly based on the generic
MST algorithm: it builds the MST as a forest and, like Prim's algorithm, follows a greedy
approach.

Algorithm:
Start with an empty set A, and select at every stage the shortest edge that has not been chosen
or rejected, regardless of where this edge is situated in the graph.
• Initially, each vertex is in its own tree in the forest.
• Then the algorithm considers each edge in turn, ordered by increasing weight.
• If an edge (u, v) connects two different trees, then (u, v) is added to the set of edges
of the MST, and the two trees connected by the edge (u, v) are merged into a single tree.
• On the other hand, if an edge (u, v) connects two vertices in the same tree, then the edge
(u, v) is discarded.
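A C sketch of these steps using a simple union-find (disjoint-set) forest; the edge list, the find helper and the vertex limit are illustrative assumptions:

#include <stdio.h>
#include <stdlib.h>

typedef struct { int u, v, w; } Edge;

int parent[100];                       /* disjoint-set forest: one tree per vertex */

int find(int x)                        /* root of the tree that contains x */
{
    while (parent[x] != x)
        x = parent[x];
    return x;
}

int byWeight(const void *a, const void *b)
{
    return ((const Edge *)a)->w - ((const Edge *)b)->w;
}

/* Kruskal: consider edges in increasing weight order; accept an edge if its
   endpoints lie in different trees, otherwise discard it. */
int kruskal(Edge edges[], int m, int n)
{
    int total = 0, taken = 0;

    for (int i = 0; i < n; i++)
        parent[i] = i;                 /* initially every vertex is its own tree */

    qsort(edges, m, sizeof(Edge), byWeight);
    for (int i = 0; i < m && taken < n - 1; i++) {
        int ru = find(edges[i].u), rv = find(edges[i].v);
        if (ru != rv) {                /* different trees: accept and merge */
            parent[ru] = rv;
            total += edges[i].w;
            taken++;
        }
    }
    return total;                      /* weight of the minimum spanning tree */
}

int main(void)
{
    Edge edges[] = { {0,1,4}, {0,2,3}, {1,2,1}, {1,3,2}, {2,3,5} };  /* sample graph */
    printf("MST weight = %d\n", kruskal(edges, 5, 4));               /* prints 6 */
    return 0;
}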



PRACTICAL#10

OBJECTIVE: Implement N Queen Problem using Backtracking.

N Queen's problem:

The n-queens problem consists of placing n queens on an n x n chessboard in such a way that
they do not threaten each other, according to the rules of the game of chess. Every queen on a
square can reach the other squares that lie on the same horizontal, vertical, or diagonal line. So
there can be at most one queen on each horizontal line, at most one queen on each vertical line,
and at most one queen on each of the 4n-2 diagonal lines. Furthermore, since we want to place
as many queens as possible, namely exactly n queens, there must be exactly one queen on each
horizontal line and on each vertical line. The idea behind the backtracking algorithm used to
solve this problem is to successively place the queens in columns. When it is impossible to
place a queen in the current column (every free square is on the same diagonal, row, or column
as an earlier queen), the algorithm backtracks and adjusts the position of a preceding queen.

ALGORITHM

NQueens(k, n)
// Using backtracking, this procedure prints all possible placements of n queens
// on an n x n chessboard so that they are non-attacking.
{
    for i ← 1 to n do
    {
        if (Place(k, i))
        {
            x[k] ← i
            if (k = n)
                write (x[1..n])
            else
                NQueens(k + 1, n)
        }
    }
}

Adarsh Pal 2204310130001


Algorithm Place(k, i)
// Returns true if a queen can be placed in the kth row and ith column; otherwise it
// returns false. x[] is a global array whose first (k-1) values have been set.
// Abs(r) returns the absolute value of r.
{
    for j ← 1 to k-1 do
    {
        if (x[j] = i or Abs(x[j] - i) = Abs(j - k))
        {
            return false
        }
    }
    return true
}
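A runnable C sketch of this backtracking scheme, mirroring the pseudocode with 1-based coordinates stored in a global array x[] (the bound MAX and the solution counter are assumptions for illustration):

#include <stdio.h>
#include <stdlib.h>

#define MAX 20

int x[MAX + 1];          /* x[k] = column of the queen placed in row k */
int solutions = 0;

/* Returns 1 if a queen can be placed in row k, column i. */
int place(int k, int i)
{
    for (int j = 1; j <= k - 1; j++)
        if (x[j] == i || abs(x[j] - i) == abs(j - k))
            return 0;    /* same column or same diagonal as an earlier queen */
    return 1;
}

/* Place queens in rows k..n by backtracking. */
void nQueens(int k, int n)
{
    for (int i = 1; i <= n; i++)
    {
        if (place(k, i))
        {
            x[k] = i;
            if (k == n)
            {
                solutions++;
                for (int j = 1; j <= n; j++)
                    printf("%d ", x[j]);      /* one solution per line */
                printf("\n");
            }
            else
                nQueens(k + 1, n);
        }
    }
}

int main(void)
{
    int n = 4;
    nQueens(1, n);
    printf("total solutions for n=%d: %d\n", n, solutions);  /* 2 for n=4 */
    return 0;
}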

Complexity:

The set of all possible candidate solutions of the n-queens problem has size n!, and the
bounding function takes a linear amount of time to evaluate; therefore the running time of the
n-queens backtracking algorithm is O(n!).
