DAA Practical File


List of Experiments

Exp. No.   Title of the Program                                        Date of Execution   Signature

1.  Programming that uses recurrence relations to analyze recursive algorithms.

2.  Computing best, average, and worst case time complexity of various sorting techniques.

3.  Performance analysis of different internal and external sorting algorithms with different types of data sets.

4.  Use of the divide and conquer technique to solve some problem that uses two different algorithms for solving small problems.

5.  Implementation of different basic computing algorithms like hash tables, including collision-avoidance strategies, and search trees (AVL and B-trees).

6.  Consider the problem of eight queens on an (8x8) chessboard. Two queens are said to attack each other if they are on the same row, column, or diagonal. Write a program that implements a backtracking algorithm to solve the problem, i.e. place eight non-attacking queens on the board.

7.  Write a program to find the strongly connected components in a digraph.

8.  Write a program to implement file compression (and un-compression) using Huffman's algorithm.

9.  Write a program to implement a dynamic programming algorithm to solve the all pairs shortest path problem.

10. Write a program to solve the 0/1 knapsack problem using the following:
    a) Greedy algorithm.
    b) Dynamic programming algorithm.
    c) Backtracking algorithm.
    d) Branch and bound algorithm.

11. Write a program that uses a dynamic programming algorithm to solve the optimal binary search tree problem.

12. Write a program for solving the traveling salesperson problem using the following:
    a) Dynamic programming algorithm.
    b) The backtracking algorithm.
    c) Branch and bound.

Program No.1
Programming that uses recurrence relations to analyze recursive algorithms.

A recurrence relation is an equation that defines a sequence where any term is defined in terms of
its previous terms.

The recurrence relations for the time complexity of some problems are given below:

Fibonacci Number

T(N) = T(N-1) + T(N-2)


Base Conditions: T(0) = 0 and T(1) = 1
Binary Search

T(N) = T(N/2) + C
Base Condition: T(1) = 1
Merge Sort

T(N) = 2 T(N/2) + CN
Base Condition: T(1) = 1
Recursive Algorithm: Finding min and max in an array

T(N) = 2 T(N/2) + 2
Base Conditions: T(1) = 0 and T(2) = 1
Karatsuba algorithm for fast multiplication

T(N) = 3 T(N/2) + CN
Base Conditions: T(0) = 1 and T(1) = 1
Quick Sort

T(N) = T(i) + T(N-i-1) + CN


The time taken by quick sort depends upon the distribution of the input array and partition
strategy. T(i) and T(N-i-1) are two smaller subproblems after the partition where i is the number
of elements that are smaller than the pivot. CN is the time complexity of the partition process
where C is a constant.

Worst Case: This is a case of the unbalanced partition where the partition process always picks
the greatest or smallest element as the pivot (think about why!). For the recurrence relation of the worst case
scenario, we can put i = 0 in the above equation.

T(N) = T(0) + T(N-1) + CN
which is equivalent to
T(N)= T(N-1) + CN
Best Case: This is a case of the balanced partition where the partition process always picks the
middle element as the pivot. For the recurrence relation of the best case scenario, put i = N/2 in the
above equation.

T(N) = T(N/2) + T(N/2-1) + CN


which is equivalent to
T(N)= 2T(N/2) + CN
Average Case: For average case analysis, we need to consider all possible permutation of input
and time taken by each permutation.

T(N) = CN + (1/N) ∑ (for i = 0 to N-1) ( T(i) + T(N-i-1) )


Note: This looks mathematically complex but we can find several other intuitive ways to analyse
the average case of quick sort.
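
The experiment also calls for a small program. Below is a minimal sketch (not part of the original file) that evaluates two of the recurrences above directly from their definitions and compares them with the closed forms they solve to; the constant C is taken as 1 purely for illustration.

// Evaluating T(N) = 2T(N/2) + C*N (merge sort) and T(N) = T(N/2) + C
// (binary search) directly from the recurrences, and comparing the
// results with N log2 N and log2 N respectively.
#include <cmath>
#include <iostream>
using namespace std;

long long mergeSortT(long long n, long long C = 1) {
    if (n <= 1) return 1;                      // base condition T(1) = 1
    return 2 * mergeSortT(n / 2, C) + C * n;
}

long long binarySearchT(long long n, long long C = 1) {
    if (n <= 1) return 1;                      // base condition T(1) = 1
    return binarySearchT(n / 2, C) + C;
}

int main() {
    for (long long n = 2; n <= (1 << 20); n *= 4) {
        cout << "N = " << n
             << "   mergeSort T(N) = " << mergeSortT(n)
             << "   (N log2 N = " << (long long)(n * log2((double)n)) << ")"
             << "   binarySearch T(N) = " << binarySearchT(n)
             << "   (log2 N = " << (long long)log2((double)n) << ")\n";
    }
    return 0;
}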

Program No. 2

Computing best, average, and worst case time complexity of various sorting techniques.

Time and Space Complexities of Common Sorting Algorithms


We cover the time and space complexities of six popular sorting algorithms: Bubble Sort,
Selection Sort, Insertion Sort, Merge Sort, Quick Sort, and Heap Sort.

1. Bubble Sort
In bubble sort, we compare each adjacent pair. If they are not in the correct order, we swap them.
We keep repeating the previous step until no swaps are needed, which indicates all the elements are
sorted.

Time Complexity:

• Worst case: O(n²)


When the array is reverse-sorted, we iterate through the array (n - 1) times. In the first iteration, we
do (n - 1) swaps, (n - 2) in the second, and so on until in the last iteration where we do only one
swap. Thus the total number of swaps sums up to n * (n - 1) / 2.

• Average case: O(n²)


For a completely random array, the total number of swaps averages out to be around n²/4, which
is again O(n²).

• Best case: O(n)


In the first pass over the array, if we do not perform any swap, we know that the array is already
sorted, so we stop sorting; the time complexity therefore turns out to be linear.

Space Complexity:

Since we use only a constant amount of additional memory apart from the input array, the space
complexity is O(1).
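
These cases can also be checked empirically. The sketch below (illustrative, not part of the original file) instruments bubble sort with comparison and swap counters and runs it on an already-sorted and a reverse-sorted array; the counts show the O(n) best case and the roughly n(n-1)/2 operations of the worst case.

#include <iostream>
#include <utility>
#include <vector>
using namespace std;

// Bubble sort with the early-exit check that gives the O(n) best case,
// instrumented to count comparisons and swaps.
void bubbleSort(vector<int>& a, long long& comps, long long& swaps) {
    for (size_t i = 0; i + 1 < a.size(); i++) {
        bool swapped = false;
        for (size_t j = 0; j + 1 < a.size() - i; j++) {
            comps++;
            if (a[j] > a[j + 1]) {
                swap(a[j], a[j + 1]);
                swaps++;
                swapped = true;
            }
        }
        if (!swapped) break;   // no swaps in a full pass: the array is sorted
    }
}

int main() {
    const int n = 1000;
    vector<int> asc(n), desc(n);
    for (int i = 0; i < n; i++) {
        asc[i] = i;          // best case input (already sorted)
        desc[i] = n - i;     // worst case input (reverse sorted)
    }
    long long c = 0, s = 0;
    bubbleSort(asc, c, s);
    cout << "sorted input:   comparisons = " << c << ", swaps = " << s << "\n";
    c = s = 0;
    bubbleSort(desc, c, s);
    cout << "reversed input: comparisons = " << c << ", swaps = " << s << "\n";
    return 0;
}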

2. Selection Sort

Selection sort is a simple sorting algorithm that divides the array into two parts: a subarray of
already sorted elements and a subarray of remaining elements to be sorted. The sorted subarray is
initially empty. We iterate over the array (n - 1) times. In each iteration, we find the smallest
element from the unsorted subarray and place it at the end of the sorted subarray.

Time Complexity:

Worst case = Average Case = Best Case = O(n²)


We perform the same number of comparisons for an array of any given size.

In the first iteration, we perform (n - 1) comparisons, (n - 2) in the second, and so on until the last
iteration where we perform only one comparison. Thus the total number of comparisons sums up to
n * (n - 1) / 2. The number of swaps performed is at most n - 1. So the overall time complexity is
quadratic.

Space Complexity:

Since we are not using any extra data structure apart from the input array, the space complexity
is O(1).

3. Insertion Sort
Like selection sort, the insertion sort algorithm also divides the array into two parts: a subarray of
already sorted elements and a subarray of remaining elements to be sorted. The sorted subarray
initially contains only the first element of the array. We iterate over each of the remaining elements
and put them in their correct position in the sorted subarray.

Time Complexity:

• Worst case: O(n²)


When we apply insertion sort on a reverse-sorted array, it will insert each element at the beginning
of the sorted subarray, making it the worst time complexity of insertion sort.

• Average case: O(n²)


When the array elements are in random order, the average running time is O(n²/4) = O(n²).

• Best case: O(n)


When we initiate insertion sort on an already sorted array, it will only compare each element to its
predecessor, thereby requiring n steps to sort the already sorted array of n elements.

Space Complexity:

Since we use only a constant amount of additional memory apart from the input array, the space
complexity is O(1).
4. Merge Sort
In merge sort, we divide the unsorted array into n subarrays, each of size one, which can be
considered sorted trivially. Then, we repeatedly merge the sorted subarrays to produce new sorted
subarrays until there is only one sorted array of size n remaining.

Time Complexity:

Worst case = Average Case = Best Case = O(n log n)

Merge sort performs the same number of operations for any input array of a given size.

In this algorithm, we keep dividing the array into two subarrays recursively, which creates O(log n)
levels of recursion where each element appears in each level exactly once.

For each level, it takes O(n) time to merge every pair of subarrays. So the overall time complexity
becomes O(n log n).

Space Complexity:

Since we use an auxiliary array of size at most n to store the merged subarray, the space complexity
is O(n).

5. Quicksort
Quicksort is a relatively more complex algorithm. It uses a divide-and-conquer strategy to divide
the array into two subarrays. We choose an element called pivot and we then place it in its correct
index and then reorder the array by moving all the elements less than pivot to its left and all the
elements greater than it to its right.

We then recursively sort the subarrays of these subarrays until the entire array is sorted. The
efficiency of the quicksort algorithm heavily depends on the selection of the pivot element.

Time Complexity:

• Worst case: O(n²)


When the array is sorted and we choose the leftmost element as pivot, or the array is reverse-sorted
and we choose the rightmost element as pivot, the time complexity becomes quadratic since
partitioning the array results in highly unbalanced subarrays in such cases.

Also when there are a large number of identical elements in the array, optimal partitioning becomes
hard, resulting in quadratic time complexity.

• Average case and best case: O(n log n)


The best case for quick-sort happens when we successfully pick the median element for
partitioning every time. Such partitioning allows us to divide the array in half every time.

We can avoid the worst-case in quicksort almost always by choosing an appropriate pivot. There
are various ways to achieve this:

• Pick the pivot from the middle of the array


• Adopt a random selection of pivots
• Take the median of three pivot candidates, i.e., choose the median of the first, middle, and
the last elements of the array as the pivot.
These methods result in almost equal partitioning of the array, on average. This way the average
case time complexity of quicksort practically becomes O(n log n).
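
A sketch of the median-of-three rule described above, combined with a Lomuto-style partition, is given below (this is an illustrative implementation, not a listing from the original file).

#include <iostream>
#include <utility>
#include <vector>
using namespace std;

// Put the median of a[lo], a[mid], a[hi] at a[hi] so it becomes the pivot.
void medianOfThree(vector<int>& a, int lo, int hi) {
    int mid = lo + (hi - lo) / 2;
    if (a[mid] < a[lo]) swap(a[mid], a[lo]);
    if (a[hi] < a[lo])  swap(a[hi], a[lo]);
    if (a[hi] < a[mid]) swap(a[hi], a[mid]);   // median is now at a[mid]
    swap(a[mid], a[hi]);                       // park the pivot at a[hi]
}

int partition(vector<int>& a, int lo, int hi) {
    medianOfThree(a, lo, hi);
    int pivot = a[hi], i = lo - 1;
    for (int j = lo; j < hi; j++)
        if (a[j] <= pivot) swap(a[++i], a[j]);
    swap(a[i + 1], a[hi]);
    return i + 1;                              // pivot's final position
}

void quickSort(vector<int>& a, int lo, int hi) {
    if (lo < hi) {
        int p = partition(a, lo, hi);
        quickSort(a, lo, p - 1);
        quickSort(a, p + 1, hi);
    }
}

int main() {
    vector<int> a = { 9, 3, 7, 1, 8, 2, 5, 10, 4, 6 };
    quickSort(a, 0, (int)a.size() - 1);
    for (int x : a) cout << x << " ";
    cout << endl;
    return 0;
}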

Space Complexity:

Although quicksort doesn’t use auxiliary space to store array elements, additional space is required
for creating stack frames in recursive calls.

• Worst case: O(n)


This happens when the pivot element is the largest or smallest element of the array in every
recursive call. The sizes of the subarrays after partitioning will be n-1 and 1. In this case, the depth of
the recursion tree will be n.

• Best case: O(log n)


This happens when the pivot element’s correct position in the partitioned array is in the middle
every time. The size of subarrays will be half the size of the original array. In this case, the
recursive tree’s size will be O(log n).

6. Heap Sort
In heap sort, we first convert the array into a max-heap. Then we repeatedly extract the maximum element
from the heap and place it at its final position at the end of the array. Heap sort restores the heap property after each extraction.

Time Complexity:

Worst case = Average Case = Best Case = O(n log n)

The order of time taken by the heap sort algorithm for an array of any given size is the same.

The process of extraction in a heap structure with n elements takes logarithmic time, O(log n).
When there are n elements in the heap it takes O(log n) time to extract, then there remain (n - 1)
elements, the next extraction takes O(log (n - 1)), and so on until there is only one element in the
heap where the extraction takes O(log 1) time.

The total time complexity sums up to O(log n) + O(log(n - 1)) + … + O(1) = O(log(n!)), which
simplifies to O(n log n).

Space Complexity:

Since we are not using any extra data structure, heap sort is an in-place sorting algorithm.
Therefore, its space complexity is O(1).

Program No.3

Performance analysis of different internal and external sorting algorithms with different types
of data sets.

We consider sorting a list of records, either into ascending or descending order, based upon the
value of some field of the record that we will call the sort key. The list may be contiguous and randomly
accessible (e.g., an array), or it may be dispersed and only sequentially accessible (e.g., a linked
list). The same logic applies in both cases, although implementation details will differ. When
analyzing the performance of various sorting algorithms we will generally consider two factors:
• the number of sort key comparisons that are required
• the number of times records in the list must be moved
Both worst-case and average-case performance are significant.

Internal or External?
In an internal sort, the list of records is small enough to be maintained entirely in physical memory
for the duration of the sort. In an external sort, the list of records will not fit entirely into physical
memory at once. In that case, the records are kept in disk files and only a selection of them are
resident in physical memory at any given time. We will consider only internal sorting at this time.
Sorted Lists
Sorting a list requires accessing the data elements stored in the list. For efficiency this suggests that
the sort capability should be provided via member functions of the list class in question. We will
follow that approach here, although the alternative is certainly feasible. Building upon the
LinkListT or ArrayT classes discussed earlier, we could derive sorted variants, overriding some
member functions of the base class and adding new member functions for sorting. If we do that,
what base member functions must be overridden?
• The insertion functions must now take into account the ordering of the list elements, placing each new element in the proper location.
• A number of the base member functions are unsuitable for a sorted list type and must be disabled.
• The search functions may now take advantage of the list element ordering.

Generic Linked List Interface

template <class Item>
class LinkListT {
protected:
    LinkNodeT<Item>* Head;   // points to head node in list
    LinkNodeT<Item>* Tail;   // points to tail node in list
    LinkNodeT<Item>* Curr;   // points to "current" node in list
public:
    LinkListT();
    LinkListT(const LinkListT& Source);
    LinkListT& operator=(const LinkListT& Source);
    ~LinkListT();
    bool isEmpty() const;
    bool inList() const;
    bool PrefixNode(Item newData);
    bool AppendNode(Item newData);
    bool InsertAfterCurr(Item newData);
    bool Advance();
    void gotoHead();
    void gotoTail();
    bool MakeEmpty();
    bool DeleteCurrentNode();
    bool DeleteValue(Item Target);
    Item getCurrentData() const;
    void PrintList(ostream& Out);
};
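
To carry out the performance analysis itself, a small driver can time an O(n²) internal sort against an O(n log n) one on sorted, reverse-sorted, and random data sets. The sketch below is illustrative (not part of the original hand-out) and uses insertion sort and the library routine std::sort:

#include <algorithm>
#include <chrono>
#include <iostream>
#include <random>
#include <vector>
using namespace std;
using namespace std::chrono;

void insertionSort(vector<int>& a) {
    for (size_t i = 1; i < a.size(); i++) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) { a[j] = a[j - 1]; j--; }
        a[j] = key;
    }
}

// Time a sorting routine on a copy of the data set, in milliseconds.
template <class Sorter>
long long timeMs(vector<int> data, Sorter sorter) {
    auto start = steady_clock::now();
    sorter(data);
    return duration_cast<milliseconds>(steady_clock::now() - start).count();
}

int main() {
    const int n = 20000;
    vector<int> asc(n), desc(n), rnd(n);
    mt19937 gen(42);
    for (int i = 0; i < n; i++) {
        asc[i]  = i;                 // already sorted data set
        desc[i] = n - i;             // reverse-sorted data set
        rnd[i]  = (int)(gen() % n);  // random data set
    }
    auto ins = [](vector<int>& v) { insertionSort(v); };
    auto lib = [](vector<int>& v) { sort(v.begin(), v.end()); };
    cout << "data set   insertionSort(ms)   std::sort(ms)\n";
    cout << "sorted     " << timeMs(asc,  ins) << "   " << timeMs(asc,  lib) << "\n";
    cout << "reverse    " << timeMs(desc, ins) << "   " << timeMs(desc, lib) << "\n";
    cout << "random     " << timeMs(rnd,  ins) << "   " << timeMs(rnd,  lib) << "\n";
    return 0;
}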

Program No. 4

Use of the divide and conquer technique to solve some problem that uses two different algorithms
for solving small problems.

// C++ code to demonstrate Divide and
// Conquer Algorithm
#include <iostream>
using namespace std;

// function to find the maximum no.


// in a given array.
int DAC_Max(int arr[], int index, int l)
{
int max;
if(l - 1 == 0)
{
return arr[index];
}
if(index >= l - 2)
{
if(arr[index] > arr[index + 1])
return arr[index];
else
return arr[index + 1];
}
max = DAC_Max(arr, index + 1, l);
if(arr[index] > max)
return arr[index];
else
return max;
}

// Function to find the minimum no.


// in a given array
int DAC_Min(int arr[], int index, int l)
{
int min;
if(l - 1 == 0)
{
return arr[index];
}
if(index >= l - 2)
{
if(arr[index] < arr[index + 1])
return arr[index];
else
return arr[index + 1];
}

min = DAC_Min(arr, index + 1, l);


if(arr[index] < min)
return arr[index];
else
return min;
}

// Driver code
int main()
{
int arr[7] = { 70, 250, 50, 80, 140, 12, 14 };
int n = sizeof(arr) / sizeof(arr[0]);
int max, min;
max = DAC_Max(arr, 0, n);
min = DAC_Min(arr, 0, n);
cout << "Maximum: " << max << endl;
cout << "Minimum: " << min << endl;
return 0;
}
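
The listing above peels off one element per recursive call. A variant that actually splits the array in half, and uses direct comparison as the second, simpler algorithm for the small problems of size one or two, is sketched below (illustrative only, not the file's original listing):

#include <iostream>
using namespace std;

// Divide and conquer min/max: subarrays of size 1 or 2 are solved
// directly by comparison; larger ones are split in half and the
// partial answers are combined.
void minMax(int arr[], int lo, int hi, int& mn, int& mx) {
    if (lo == hi) {                       // small problem: one element
        mn = mx = arr[lo];
    } else if (hi == lo + 1) {            // small problem: two elements
        if (arr[lo] < arr[hi]) { mn = arr[lo]; mx = arr[hi]; }
        else                   { mn = arr[hi]; mx = arr[lo]; }
    } else {                              // divide, conquer, combine
        int mid = lo + (hi - lo) / 2;
        int mn1, mx1, mn2, mx2;
        minMax(arr, lo, mid, mn1, mx1);
        minMax(arr, mid + 1, hi, mn2, mx2);
        mn = (mn1 < mn2) ? mn1 : mn2;
        mx = (mx1 > mx2) ? mx1 : mx2;
    }
}

int main() {
    int arr[7] = { 70, 250, 50, 80, 140, 12, 14 };
    int mn, mx;
    minMax(arr, 0, 6, mn, mx);
    cout << "Maximum: " << mx << "\nMinimum: " << mn << endl;
    return 0;
}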

Program No. 5

Implementation of different basic computing algorithms like Hash tables, including collision-
avoidance strategies, Search trees (AVL and B-trees).

// Implementing hash table in C

#include <stdio.h>
#include <stdlib.h>

struct set
{
int key;
int data;
};
struct set *array;
int capacity = 10;
int size = 0;

int hashFunction(int key)


{
return (key % capacity);
}
int checkPrime(int n)
{
int i;
if (n == 1 || n == 0)
{
return 0;
}
for (i = 2; i < n / 2; i++)
{
if (n % i == 0)
{
return 0;
}
}
return 1;
}
int getPrime(int n)
{
if (n % 2 == 0)
{
n++;
}
while (!checkPrime(n))
{
n += 2;
}
return n;
}
void init_array()
{
capacity = getPrime(capacity);
array = (struct set *)malloc(capacity * sizeof(struct set));
for (int i = 0; i < capacity; i++)
{
array[i].key = 0;
array[i].data = 0;
}
}

void insert(int key, int data)


{
int index = hashFunction(key);
if (array[index].data == 0)
{
array[index].key = key;
array[index].data = data;
size++;
printf("\n Key (%d) has been inserted \n", key);
}
else if (array[index].key == key)
{
array[index].data = data;
}
else
{
printf("\n Collision occured \n");
}
}

void remove_element(int key)


{
int index = hashFunction(key);
if (array[index].data == 0)
{
printf("\n This key does not exist \n");
}
else
{
array[index].key = 0;
array[index].data = 0;
size--;
printf("\n Key (%d) has been removed \n", key);
}
}
void display()
{
int i;
for (i = 0; i < capacity; i++)
{
if (array[i].data == 0)
{
printf("\n array[%d]: / ", i);
}
else
{
printf("\n key: %d array[%d]: %d \t", array[i].key, i, array[i].data);
}
}
}

int size_of_hashtable()
{
return size;
}

int main()
{
int choice, key, data, n;
int c = 0;
init_array();

do
{
printf("1.Insert item in the Hash Table"
"\n2.Remove item from the Hash Table"
"\n3.Check the size of Hash Table"
"\n4.Display a Hash Table"
"\n\n Please enter your choice: ");

scanf("%d", &choice);
switch (choice)
{
case 1:

printf("Enter key -:\t");


scanf("%d", &key);
printf("Enter data -:\t");
scanf("%d", &data);
insert(key, data);

break;

case 2:

printf("Enter the key to delete-:");


scanf("%d", &key);
remove_element(key);

break;

case 3:

n = size_of_hashtable();
printf("Size of Hash Table is-:%d\n", n);

break;

case 4:

display();

break;

default:

printf("Invalid Input\n");
}

printf("\nDo you want to continue (press 1 for yes): ");


scanf("%d", &c);

} while (c == 1);
}
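
The insert() above only reports a collision. One common collision-avoidance strategy is open addressing with linear probing; the sketch below is illustrative (its table layout mirrors the listing above, but the names are new) and probes forward until a free slot is found:

#include <cstdio>

// Table layout mirroring the listing above: key 0 marks an empty slot.
// (Illustrative names; not the globals used by the program above.)
struct Slot { int key; int data; };
const int CAPACITY = 11;
Slot table_[CAPACITY];          // zero-initialised: all slots start empty

// Open addressing with linear probing: on a collision, scan forward
// (wrapping around) until an empty slot or the same key is found.
bool insertLinearProbe(int key, int data) {
    int home = key % CAPACITY;
    for (int probe = 0; probe < CAPACITY; probe++) {
        int i = (home + probe) % CAPACITY;
        if (table_[i].key == 0 || table_[i].key == key) {
            table_[i].key = key;
            table_[i].data = data;
            return true;
        }
    }
    return false;               // table is full
}

int main() {
    insertLinearProbe(12, 100);
    insertLinearProbe(23, 200); // 23 % 11 == 12 % 11 == 1: collides, lands in slot 2
    for (int i = 0; i < CAPACITY; i++)
        if (table_[i].key != 0)
            std::printf("slot %d -> key %d, data %d\n", i, table_[i].key, table_[i].data);
    return 0;
}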

Program No.6

Consider the problem of eight queens on an (8x8) chessboard. Two queens are said to attack
each other if they are on the same row, column, or diagonal. Write a program that
implements backtracking algorithm to solve the problem i.e. place eight non-attacking queens
on the board.

// C program to solve N Queen Problem using backtracking

#define N 4 // board size; the problem statement asks for eight queens (N = 8), N = 4 is used here for a compact output
#include <stdbool.h>
#include <stdio.h>

// A utility function to print solution


void printSolution(int board[N][N])
{
for (int i = 0; i < N; i++) {
for (int j = 0; j < N; j++) {
if(board[i][j])
printf("Q ");
else
printf(". ");
}
printf("\n");
}
}

// A utility function to check if a queen can


// be placed on board[row][col]. Note that this
// function is called when "col" queens are
// already placed in columns from 0 to col -1.
// So we need to check only left side for
// attacking queens
bool isSafe(int board[N][N], int row, int col)
{
int i, j;

// Check this row on left side


for (i = 0; i < col; i++)
if (board[row][i])
return false;

// Check upper diagonal on left side


for (i = row, j = col; i >= 0 && j >= 0; i--, j--)
if (board[i][j])
return false;

// Check lower diagonal on left side


for (i = row, j = col; j >= 0 && i < N; i++, j--)
if (board[i][j])
return false;

return true;
}

// A recursive utility function to solve N


// Queen problem

bool solveNQUtil(int board[N][N], int col)
{
// Base case: If all queens are placed
// then return true
if (col >= N)
return true;

// Consider this column and try placing


// this queen in all rows one by one
for (int i = 0; i < N; i++) {

// Check if the queen can be placed on


// board[i][col]
if (isSafe(board, i, col)) {

// Place this queen in board[i][col]


board[i][col] = 1;

// Recur to place rest of the queens


if (solveNQUtil(board, col + 1))
return true;

// If placing queen in board[i][col]


// doesn't lead to a solution, then
// remove queen from board[i][col]
board[i][col] = 0; // BACKTRACK
}
}

// If the queen cannot be placed in any row in
// this column col then return false
return false;
}

// This function solves the N Queen problem using


// Backtracking. It mainly uses solveNQUtil() to
// solve the problem. It returns false if queens
// cannot be placed, otherwise, return true and
// prints placement of queens in the form of 1s.
// Please note that there may be more than one
// solutions, this function prints one of the
// feasible solutions.
bool solveNQ()
{
int board[N][N] = { { 0, 0, 0, 0 },
{ 0, 0, 0, 0 },
{ 0, 0, 0, 0 },
{ 0, 0, 0, 0 } };

if (solveNQUtil(board, 0) == false) {
printf("Solution does not exist");
return false;
}

printSolution(board);
return true;

}

// Driver program to test above function


int main()
{
solveNQ();
return 0;
}
Output
. . Q .
Q . . .
. . . Q
. Q . .
Time Complexity: O(N!)
Auxiliary Space: O(N²)

Program No. 7

Write a program to find the strongly connected components in a digraph.

// C++ Implementation of Kosaraju's algorithm to print all SCCs

#include <iostream>

#include <list>

#include <stack>

using namespace std;

class Graph
{
    int V;              // No. of vertices
    list<int> *adj;     // An array of adjacency lists

    // Fills Stack with vertices (in increasing order of finishing
    // times). The top element of stack has the maximum finishing
    // time
    void fillOrder(int v, bool visited[], stack<int>& Stack);

    // A recursive function to print DFS starting from v
    void DFSUtil(int v, bool visited[]);

public:
    Graph(int V);
    void addEdge(int v, int w);

    // The main function that finds and prints strongly connected
    // components
    void printSCCs();

    // Function that returns reverse (or transpose) of this graph
    Graph getTranspose();
};

Graph::Graph(int V)
{
    this->V = V;
    adj = new list<int>[V];
}

// A recursive function to print DFS starting from v
void Graph::DFSUtil(int v, bool visited[])
{
    // Mark the current node as visited and print it
    visited[v] = true;
    cout << v << " ";

    // Recur for all the vertices adjacent to this vertex
    list<int>::iterator i;
    for (i = adj[v].begin(); i != adj[v].end(); ++i)
        if (!visited[*i])
            DFSUtil(*i, visited);
}

Graph Graph::getTranspose()
{
    Graph g(V);
    for (int v = 0; v < V; v++)
    {
        // Reverse every edge leaving this vertex
        list<int>::iterator i;
        for (i = adj[v].begin(); i != adj[v].end(); ++i)
            g.adj[*i].push_back(v);
    }
    return g;
}

void Graph::addEdge(int v, int w)
{
    adj[v].push_back(w); // Add w to v's list.
}

void Graph::fillOrder(int v, bool visited[], stack<int>& Stack)
{
    // Mark the current node as visited
    visited[v] = true;

    // Recur for all the vertices adjacent to this vertex
    list<int>::iterator i;
    for (i = adj[v].begin(); i != adj[v].end(); ++i)
        if (!visited[*i])
            fillOrder(*i, visited, Stack);

    // All vertices reachable from v are processed by now, push v
    Stack.push(v);
}

// The main function that finds and prints all strongly connected
// components
void Graph::printSCCs()
{
    stack<int> Stack;

    // Mark all the vertices as not visited (For first DFS)
    bool *visited = new bool[V];
    for (int i = 0; i < V; i++)
        visited[i] = false;

    // Fill vertices in stack according to their finishing times
    for (int i = 0; i < V; i++)
        if (visited[i] == false)
            fillOrder(i, visited, Stack);

    // Create a reversed graph
    Graph gr = getTranspose();

    // Mark all the vertices as not visited (For second DFS)
    for (int i = 0; i < V; i++)
        visited[i] = false;

    // Now process all vertices in order defined by Stack
    while (Stack.empty() == false)
    {
        // Pop a vertex from stack
        int v = Stack.top();
        Stack.pop();

        // Print Strongly connected component of the popped vertex
        if (visited[v] == false)
        {
            gr.DFSUtil(v, visited);
            cout << endl;
        }
    }
}

// Driver program to test above functions
int main()
{
    // Create a sample graph
    Graph g(5);
    g.addEdge(1, 0);
    g.addEdge(0, 2);
    g.addEdge(2, 1);
    g.addEdge(0, 3);
    g.addEdge(3, 4);

    cout << "Following are strongly connected components in "
            "given graph \n";
    g.printSCCs();

    return 0;
}

Program No. 8

Write a program to implement file compression (and un-compression) using Huffman's algorithm.

// Opening input file in read-only mode


int fd1 = open("sample.txt", O_RDONLY);
if (fd1 == -1) {
perror("Open Failed For Input File:\n");
exit(1);
}

// Creating output file in write mode


int fd2 = open("sample - compressed.txt",
               O_WRONLY | O_CREAT, S_IRUSR | S_IWUSR);
if (fd2 == -1) {
perror("Open Failed For Output File:\n");
exit(1);
}

// Structure for tree nodes


struct Node {
char character;
int freq;
struct Node *l, *r;
};

// Structure for min heap


struct Min_Heap {
int size;
struct Node** array;
};

// Function to create min heap


struct Min_Heap* createAndBuildMin_Heap(char arr[],
int freq[],
int unique_size)
{
int i;

// Initializing heap
struct Min_Heap* Min_Heap
= (struct Min_Heap*)malloc(sizeof(struct Min_Heap));
Min_Heap->size = unique_size;
Min_Heap->array = (struct Node**)malloc(
Min_Heap->size * sizeof(struct Node*));

// Initializing the array of pointers in minheap.


// Pointers pointing to new nodes of character
// and their frequency
for (i = 0; i < unique_size; ++i) {

// newNode is a function
// to initialize new node
Min_Heap->array[i] = newNode(arr[i], freq[i]);
}

int n = Min_Heap->size - 1;
for (i = (n - 1) / 2; i >= 0; --i) {

// Standard function for Heap creation


Heapify(Min_Heap, i);
}

return Min_Heap;
}

// Function to build Huffman Tree


struct Node* buildHuffmanTree(char arr[], int freq[],
int unique_size)
{
struct Node *l, *r, *top;
while (!isSizeOne(Min_Heap)) {
l = extractMinFromMin_Heap(Min_Heap);
r = extractMinFromMin_Heap(Min_Heap);
top = newNode('$', l->freq + r->freq);
top->l = l;
top->r = r;
insertIntoMin_Heap(Min_Heap, top);
}
return extractMinFromMin_Heap(Min_Heap);
}

// Structure to store codes in compressed file


typedef struct code {
char k;
int l;
int code_arr[16];
struct code* p;
} code;

// Function to print codes into file


void printCodesIntoFile(int fd2, struct Node* root,
int t[], int top = 0)
{
int i;
if (root->l) {
t[top] = 0;
printCodesIntoFile(fd2, root->l, t, top + 1);
}

if (root->r) {
t[top] = 1;
printCodesIntoFile(fd2, root->r, t, top + 1);
}

if (isLeaf(root)) {
data = (code*)malloc(sizeof(code));
tree = (Tree*)malloc(sizeof(Tree));
data->p = NULL;
data->k = root->character;
tree->g = root->character;
write(fd2, &tree->g, sizeof(char));
for (i = 0; i < top; i++) {
data->code_arr[i] = t[i];
}
tree->len = top;
write(fd2, &tree->len, sizeof(int));
tree->dec
= convertBinaryToDecimal(data->code_arr, top);
write(fd2, &tree->dec, sizeof(int));
data->l = top;
data->p = NULL;
if (k == 0) {
front = rear = data;
k++;
}
else {
rear->p = data;
rear = rear->p;
}
}
}

// Function to compress file


void compressFile(int fd1, int fd2, unsigned char a)
{
char n;
int h = 0, i;

// Codes are written into file in bit by bit format


while (read(fd1, &n, sizeof(char)) != 0) {
rear = front;
while (rear->k != n && rear->p != NULL) {
rear = rear->p;
}
if (rear->k == n) {
for (i = 0; i < rear->l; i++) {
if (h < 7) {
if (rear->code_arr[i] == 1) {
a++;
a = a << 1;
h++;
}
else if (rear->code_arr[i] == 0) {
a = a << 1;
h++;
}
}
else if (h == 7) {
if (rear->code_arr[i] == 1) {
a++;
h = 0;
}
else {
h = 0;
}
write(fd2, &a, sizeof(char));
a = 0;
}
}
}
}
for (i = 0; i < 7 - h; i++) {
a = a << 1;
}
write(fd2, &a, sizeof(char));
}
typedef struct Tree {
char g;
int len;
int dec;
struct Tree* f;
struct Tree* r;
} Tree;

// Function to extract Huffman codes


// from a compressed file
void ExtractCodesFromFile(int fd1)
{
read(fd1, &t->g, sizeof(char));
read(fd1, &t->len, sizeof(int));
read(fd1, &t->dec, sizeof(int));
}

// Function to rebuild the Huffman tree


void ReBuildHuffmanTree(int fd1, int size)
{
int i = 0, j, k;
tree = (Tree*)malloc(sizeof(Tree));
tree_temp = tree;
tree->f = NULL;
tree->r = NULL;
t = (Tree*)malloc(sizeof(Tree));
t->f = NULL;
t->r = NULL;
for (k = 0; k < size; k++) {
tree_temp = tree;
ExtractCodesFromFile(fd1);
int bin[MAX], bin_con[MAX];
for (i = 0; i < MAX; i++) {
bin[i] = bin_con[i] = 0;
}
convertDecimalToBinary(bin, t->dec, t->len);
for (i = 0; i < t->len; i++) {
bin_con[i] = bin[i];
}

for (j = 0; j < t->len; j++) {


if (bin_con[j] == 0) {
if (tree_temp->f == NULL) {
tree_temp->f
= (Tree*)malloc(sizeof(Tree));
}
tree_temp = tree_temp->f;
}
else if (bin_con[j] == 1) {
if (tree_temp->r == NULL) {
tree_temp->r
= (Tree*)malloc(sizeof(Tree));
}
tree_temp = tree_temp->r;
}
}
tree_temp->g = t->g;
tree_temp->len = t->len;
tree_temp->dec = t->dec;
tree_temp->f = NULL;
tree_temp->r = NULL;
tree_temp = tree;
}
}
void decompressFile(int fd1, int fd2, int f)
{
int inp[8], i, k = 0;
unsigned char p;
read(fd1, &p, sizeof(char));
convertDecimalToBinary(inp, p, 8);
tree_temp = tree;
for (i = 0; i < 8 && k < f; i++) {
if (!isroot(tree_temp)) {
if (i != 7) {
if (inp[i] == 0) {
tree_temp = tree_temp->f;
}
if (inp[i] == 1) {
tree_temp = tree_temp->r;
}
}
else {
if (inp[i] == 0) {
tree_temp = tree_temp->f;
}
if (inp[i] == 1) {
tree_temp = tree_temp->r;
}
if (read(fd1, &p, sizeof(char)) != 0) {
convertDecimalToBinary(inp, p, 8);
i = -1;
}
else {
break;
}
}
}
else {
k++;
write(fd2, &tree_temp->g, sizeof(char));
tree_temp = tree;
i--;
}
}
}
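
The fragments above rely on helper routines (newNode, Heapify, extractMinFromMin_Heap, convertBinaryToDecimal, and several globals) that are not shown. For reference, here is a self-contained sketch of the core of Huffman's algorithm that generates the prefix codes with a std::priority_queue instead of the hand-written min heap; it is an alternative illustration, not the original program's code.

#include <iostream>
#include <queue>
#include <string>
#include <vector>
using namespace std;

struct Node {
    char character;
    long long freq;
    Node *l = nullptr, *r = nullptr;
    Node(char c, long long f) : character(c), freq(f) {}
};

struct Cmp {    // orders the priority queue as a min-heap on frequency
    bool operator()(const Node* a, const Node* b) const { return a->freq > b->freq; }
};

// Repeatedly merge the two least frequent nodes, as in Huffman's algorithm.
Node* buildHuffmanTree(const vector<pair<char, long long>>& freqs) {
    priority_queue<Node*, vector<Node*>, Cmp> pq;
    for (auto& p : freqs) pq.push(new Node(p.first, p.second));
    while (pq.size() > 1) {
        Node* l = pq.top(); pq.pop();
        Node* r = pq.top(); pq.pop();
        Node* top = new Node('$', l->freq + r->freq);   // internal node
        top->l = l;
        top->r = r;
        pq.push(top);
    }
    return pq.top();
}

// Walk the tree: left edge = 0, right edge = 1; leaves carry the characters.
void printCodes(const Node* root, const string& code) {
    if (!root) return;
    if (!root->l && !root->r) { cout << root->character << ": " << code << "\n"; return; }
    printCodes(root->l, code + "0");
    printCodes(root->r, code + "1");
}

int main() {
    vector<pair<char, long long>> freqs = { {'a', 5}, {'b', 9}, {'c', 12},
                                            {'d', 13}, {'e', 16}, {'f', 45} };
    printCodes(buildHuffmanTree(freqs), "");
    return 0;
}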

Program No. 9 & 10

Write a program to solve 0/1 knapsack problem using the following:


a) Greedy algorithm (see the sketch after this list).
b) Dynamic programming algorithm.
c) Backtracking algorithm.
d) Branch and bound algorithm.
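
No listing for part (a) appears below, so here is a minimal sketch of the usual greedy heuristic: sort by value/weight ratio and take whole items while they fit. For the 0/1 version this is only a heuristic — it does not always reach the optimum found by parts (b)–(d).

#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

struct Item { int value, weight; };

// Greedy heuristic for 0/1 knapsack: take items in decreasing
// value/weight ratio as long as they still fit.
int greedyKnapsack(int W, vector<Item> items) {
    sort(items.begin(), items.end(), [](const Item& a, const Item& b) {
        return (double)a.value / a.weight > (double)b.value / b.weight;
    });
    int total = 0;
    for (const Item& it : items) {
        if (it.weight <= W) { W -= it.weight; total += it.value; }
    }
    return total;
}

int main() {
    // Same data as the recursive listings further below.
    vector<Item> items = { {60, 10}, {100, 20}, {120, 30} };
    cout << "Greedy value: " << greedyKnapsack(50, items) << endl;   // prints 160; the optimal 0/1 answer is 220
    return 0;
}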

#include<stdio.h>
#include<string.h>
intfindMax(int n1,int n2){
if(n1>n2){
return n1;
}else{
return n2;
}
}
intknapsack(int W,int wt[],int val[],int n){
int K[n+1][W+1];
for(int i =0; i<=n; i++){
for(int w =0; w<=W; w++){
if(i ==0|| w ==0){
K[i][w]=0;
}elseif(wt[i-1]<= w){
K[i][w]=findMax(val[i-1]+ K[i-1][w-wt[i-1]], K[i-1][w]);
}else{
K[i][w]= K[i-1][w];
}
}
}
return K[n][W];
}
intmain(){
int val[5]={70,20,50};
int wt[5]={11,12,13};/* A Naive recursive implementation
of 0-1 Knapsack problem */
#include <stdio.h>

// A utility function that returns


// maximum of two integers
int max(int a, int b) { return (a > b) ? a : b; }

// Returns the maximum value that can be

31
// put in a knapsack of capacity W
int knapSack(int W, int wt[], int val[], int n)
{
// Base Case
if (n == 0 || W == 0)
return 0;

// If weight of the nth item is more than


// Knapsack capacity W, then this item cannot
// be included in the optimal solution
if (wt[n - 1] > W)
return knapSack(W, wt, val, n - 1);

// Return the maximum of two cases:


// (1) nth item included
// (2) not included
else
return max(
val[n - 1]
+ knapSack(W - wt[n - 1], wt, val, n - 1),
knapSack(W, wt, val, n - 1));
}

// Driver code
int main()
{
int profit[] = { 60, 100, 120 };
int weight[] = { 10, 20, 30 };
int W = 50;
int n = sizeof(profit) / sizeof(profit[0]);
printf("%d", knapSack(W, weight, profit, n));
return 0;
}int W =30;
int len =sizeof val /sizeof val[0];
printf("Maximum Profit achieved with this knapsack: %d",knapsack(W, wt, val,
len));

Output
Maximum Profit achieved with this knapsack: 120

/* A Naive recursive implementation


of 0-1 Knapsack problem */
#include <stdio.h>

// A utility function that returns


// maximum of two integers
int max(int a, int b) { return (a > b) ? a : b; }

// Returns the maximum value that can be
// put in a knapsack of capacity W
int knapSack(int W, int wt[], int val[], int n)
{
// Base Case
if (n == 0 || W == 0)
return 0;

// If weight of the nth item is more than


// Knapsack capacity W, then this item cannot
// be included in the optimal solution
if (wt[n - 1] > W)
return knapSack(W, wt, val, n - 1);

// Return the maximum of two cases:


// (1) nth item included
// (2) not included
else
return max(
val[n - 1]
+ knapSack(W - wt[n - 1], wt, val, n - 1),
knapSack(W, wt, val, n - 1));
}

// Driver code
int main()
{
int profit[] = { 60, 100, 120 };
int weight[] = { 10, 20, 30 };
int W = 50;
int n = sizeof(profit) / sizeof(profit[0]);
printf("%d", knapSack(W, weight, profit, n));
return 0;
}

Output
220

#include <stdio.h>
#include <conio.h>
#define max 10
Advertisement

33
int w[max],i,j,p[max];
int n,m;
float unit[max];
int y[max],x[max],fp=-1,fw;
void get()
{
printf(“\n Enter total number of items: “);
scanf(“%d”,&n);
printf(“\n Enter the Maximum capacity of the Sack: “);
scanf(“%d”,&m);
for(i=0;i<n;i++)
{
printf(“\n Enter the weight of the item # %d : “,i+1);
scanf(“%d”,&w[i]);
printf(“\n Enter the profit of the item # %d : “, i+1);
scanf(“%d”, &p[i]);
}
}
Adv ert is em ents
REPORT THIS AD

void show()
{
float s=0.0;
printf(“\n\tItem\tWeight\tCost\tUnit Profit\tSelected “);
for(i=0;i<n;i++)
printf(“\n\t%d\t%d\t%d\t%f\t%d”,i+1,w[i],p[i],unit[i],x[i]);
printf(“\n\n The Sack now holds following items : “);
for(i=0;i<n;i++)
if(x[i]==1)
{
printf(“%d\t”,i+1);
s += (float) p[i] * (float) x[i];
}
printf(“\n Maximum Profit: %f “,s);
}
/*Arrange the item based on high profit per Unit*/
void sort()
{
Adv ert is em ents
REPORT THIS AD

int t,t1;
float t2;
for(i=0;i<n;i++)
unit[i] = (float) p[i] / (float) w[i];
for(i=0;i<n-1;i++)
{
for(j=i+1;j<n;j++)

34
{
if(unit[i] < unit[j])
{
t2 = unit[i];
unit[i] = unit[j];
unit[j] = t2;
t = p[i];
p[i] = p[j];
p[j] = t;
t1 = w[i];
w[i] = w[j];
Adv ert is em ents
REPORT THIS AD

w[j] =t1;
}
}
}
}
float bound(float cp,float cw,int k)
{
float b = cp;
float c = cw;
for(i=k;i<=n;i++)
{
c = c+w[i];
if( c < m)
b = b +p[i];
else
return (b+(1-(c-m)/ (float)w[i])*p[i]);
}
return b;
Adv ert is em ents
REPORT THIS AD

}
void knapsack(int k,float cp,float cw)
{
if(cw+w[k] <= m)
{
y[k] = 1;
if(k <= n)
knapsack(k+1,cp+p[k],cw+w[k]);
if(((cp+p[k]) > fp) && ( k == n))
{
fp = cp+p[k];
fw = cw+w[k];
for(j=0;j<=k;j++)
x[j] = y[j];

35
}
}
if(bound(cp,cw,k) >= fp)
{
Adv ert is em ents
REPORT THIS AD

y[k] = 0;
if( k <= n)
knapsack(k+1,cp,cw);
if((cp > fp) && (k == n))
{
fp = cp;
fw = cw;
for(j=0;j<=k;j++)
x[j] = y[j];
}
}
}
void main()
{
clrscr();
printf(“\n\n\n\t\t ******** KNAPSACK PROBLEM ********”);
printf(“\n\t\t —————————————–“);
get();
Adv ert is em ents
REPORT THIS AD

printf(“\n The Sack is arranged in the order…\n”);


sort();
knapsack(0,0.0,0.0);
show();
getch();
}
Output:
******** KNAPSACK PROBLEM ********
——————————————-
Enter total number of items: 3
Enter the Maximum capacity of the Sack: 25
Enter the weight of the item # 1 : 1
Enter the profit of the item # 1 : 11
Enter the weight of the item # 2 : 11
Enter the profit of the item # 2 : 21
Enter the weight of the item # 3 : 21
Enter the profit of the item # 3 : 31
The Sack is arranged in the order…

#include <iostream>

36
#include <algorithm>
#include <vector>
#include <queue>

using namespace std;

class Item {
public:
int value;
int weight;
double ratio;

Item(int value, int weight) {


this->value = value;
this->weight = weight;
this->ratio = (double)value / weight;
}
};

class KnapsackNode {
public:
vector<int> items;
int value;
int weight;

KnapsackNode(vector<int> items, int value, int weight) {


this->items = items;
this->value = value;
this->weight = weight;
}
};

class Knapsack {
public:
int maxWeight;
vector<Item> items;

Knapsack(int maxWeight, vector<Item> items) {


this->maxWeight = maxWeight;
this->items = items;
}

int solve() {
sort(this->items.begin(), this->items.end(), [](const Item& a, const Item& b) {
return a.ratio > b.ratio;
});

37
int bestValue = 0;
queue<KnapsackNode> q;
q.push(KnapsackNode({}, 0, 0));

while (!q.empty()) {
KnapsackNode node = q.front();
q.pop();
int i = node.items.size();

if (i == this->items.size()) {
bestValue = max(bestValue, node.value);
} else {
Item item = this->items[i];
KnapsackNode withItem(node.items, node.value + item.value,
node.weight + item.weight);
if (isPromising(withItem, this->maxWeight, bestValue)) {
q.push(withItem);
}
KnapsackNode withoutItem(node.items, node.value,
node.weight);
if (isPromising(withoutItem, this->maxWeight, bestValue)) {
q.push(withoutItem);
}
}
}

return bestValue;
}

bool isPromising(KnapsackNode node, int maxWeight, int bestValue) {


return node.weight <= maxWeight && node.value + getBound(node) >
bestValue;
}

int getBound(KnapsackNode node) {


int remainingWeight = this->maxWeight - node.weight;
int bound = node.value;

for (int i = node.items.size(); i < this->items.size(); i++) {


Item item = this->items[i];

if (remainingWeight >= item.weight) {


bound += item.value;
remainingWeight -= item.weight;
} else {

38
bound += remainingWeight * item.ratio;
break;
}
}

return bound;
}
};
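
The class above has no driver. A minimal main consistent with the output reported below is sketched here; the item data (values 60, 100, 120, weights 10, 20, 30, capacity 50) is assumed from the other listings, since the original does not show a driver.

int main() {
    vector<Item> items = { Item(60, 10), Item(100, 20), Item(120, 30) };
    Knapsack knapsack(50, items);
    cout << "Best value: " << knapsack.solve() << endl;
    return 0;
}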

Output:
Best value: 220

Program No.11

Write a program that uses dynamic programming algorithm to solve the optimal binary
search tree problem.
// A naive recursive implementation of
// optimal binary search tree problem
#include <bits/stdc++.h>
using namespace std;

// A utility function to get sum of


// array elements freq[i] to freq[j]
int sum(int freq[], int i, int j);

// A recursive function to calculate


// cost of optimal binary search tree
int optCost(int freq[], int i, int j)
{
// Base cases
if (j < i) // no elements in this subarray
return 0;
if (j == i) // one element in this subarray
return freq[i];

// Get sum of freq[i], freq[i+1], ... freq[j]


int fsum = sum(freq, i, j);

// Initialize minimum value


int min = INT_MAX;

// One by one consider all elements


// as root and recursively find cost
// of the BST, compare the cost with
// min and update min if needed
for (int r = i; r <= j; ++r)
{
int cost = optCost(freq, i, r - 1) +
optCost(freq, r + 1, j);
if (cost < min)
min = cost;
}

// Return minimum value


return min + fsum;
}

// The main function that calculates


// minimum cost of a Binary Search Tree.
// It mainly uses optCost() to find
// the optimal cost.
int optimalSearchTree(int keys[],

int freq[], int n)

{
// Here array keys[] is assumed to be
// sorted in increasing order. If keys[]
// is not sorted, then add code to sort
// keys, and rearrange freq[] accordingly.
return optCost(freq, 0, n - 1);
}

// A utility function to get sum of


// array elements freq[i] to freq[j]
int sum(int freq[], int i, int j)
{
int s = 0;
for (int k = i; k <= j; k++)
s += freq[k];
return s;
}

// Driver Code
int main()
{
int keys[] = {10, 12, 20};
int freq[] = {34, 8, 50};
int n = sizeof(keys) / sizeof(keys[0]);
cout << "Cost of Optimal BST is "
<< optimalSearchTree(keys, freq, n);
return 0;
}

Output
Cost of Optimal BST is 142
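
The listing above is the naive recursion, which recomputes overlapping subproblems. A tabulated dynamic programming version of the same recurrence is sketched below (cost[i][j] is the optimal cost for keys i..j); it gives the same answer in O(n³) time.

#include <climits>
#include <iostream>
#include <vector>
using namespace std;

int optimalSearchTreeDP(int keys[], int freq[], int n) {
    // keys[] is assumed sorted; only the frequencies determine the cost.
    (void)keys;
    // cost[i][j] = minimum cost of a BST built from keys[i..j]
    vector<vector<int>> cost(n, vector<int>(n, 0));
    for (int i = 0; i < n; i++)
        cost[i][i] = freq[i];

    for (int len = 2; len <= n; len++) {            // subproblem size
        for (int i = 0; i + len - 1 < n; i++) {
            int j = i + len - 1;
            int fsum = 0;
            for (int k = i; k <= j; k++)
                fsum += freq[k];
            cost[i][j] = INT_MAX;
            for (int r = i; r <= j; r++) {          // try every key as the root
                int c = (r > i ? cost[i][r - 1] : 0) +
                        (r < j ? cost[r + 1][j] : 0) + fsum;
                if (c < cost[i][j])
                    cost[i][j] = c;
            }
        }
    }
    return cost[0][n - 1];
}

int main() {
    int keys[] = {10, 12, 20};
    int freq[] = {34, 8, 50};
    int n = sizeof(keys) / sizeof(keys[0]);
    cout << "Cost of Optimal BST is " << optimalSearchTreeDP(keys, freq, n);
    return 0;
}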
