Data Structure Assignment


Subject: Data Structure

Course Code: ICT-231


Credit Hours: 4(3-1)
Submitted by: Haiqa Naeem
Reg No: 2023-GCWUF-0442
Submitted to: Ma’am Faiza Shafaqat
Date: 15-Oct-2024

Govt. College Women University Faisalabad


Table of Contents
I. Divide and Conquer Algorithm Definition
II. Working of Divide and Conquer Algorithm
III. Characteristics of Divide and Conquer Algorithm
IV. Examples of Divide and Conquer Algorithm
V. Complexity Analysis of Divide and Conquer Algorithm
VI. Applications of Divide and Conquer Algorithm
VII. Advantages of Divide and Conquer Algorithm
VIII. Disadvantages of Divide and Conquer Algorithm
IX. What Is a Radix Sort Algorithm?
X. Working of Radix Sort Algorithm
XI. Pseudocode of Radix Sort Algorithm
XII. Performance of Radix Sort Algorithm
XIII. Advantages and Disadvantages of Radix Sort
XIV. Applications of Radix Sort
XV. References
Introduction to Divide and Conquer Algorithm

Divide and Conquer Algorithm is a problem-solving technique used


to solve problems by dividing the main problem into subproblems,
solving them individually, and then merging their solutions to obtain
the solution to the original problem. In this article, we are going to
discuss how the Divide and Conquer Algorithm is helpful and how we
can use it to solve problems.
Divide and Conquer Algorithm Definition:
Divide and Conquer Algorithm involves breaking a larger problem
into smaller subproblems, solving them independently, and then
combining their solutions to solve the original problem. The basic idea
is to recursively divide the problem into smaller subproblems until they
become simple enough to be solved directly. Once the solutions to the
subproblems are obtained, they are then combined to produce the
overall solution.
Working of Divide and Conquer Algorithm:
Divide and Conquer Algorithm can be divided into three
steps: Divide, Conquer, and Merge.
1. Divide:
• Break down the original problem into smaller subproblems.
• Each subproblem should represent a part of the overall problem.
• The goal is to divide the problem until no further division is
possible.
2. Conquer:
• Solve each of the smaller subproblems individually.
• If a subproblem is small enough (often referred to as the “base
case”), we solve it directly without further recursion.
• The goal is to find solutions for these subproblems independently.
3. Merge:
• Combine the solutions of the subproblems to get the final solution
of the whole problem.
• Once the smaller subproblems are solved, we recursively
combine their solutions to get the solution to the larger problem.
• The goal is to formulate a solution for the original problem by
merging the results from the subproblems.

Characteristics of Divide and Conquer Algorithm:


Divide and Conquer Algorithm involves breaking down a problem into
smaller, more manageable parts, solving each part individually, and
then combining the solutions to solve the original problem. The
characteristics of Divide and Conquer Algorithm are:
• Dividing the Problem: The first step is to break the problem into
smaller, more manageable subproblems. This division can be
done recursively until the subproblems become simple enough to
solve directly.
• Independence of Subproblems: Each subproblem should be
independent of the others, meaning that solving one subproblem
does not depend on the solution of another. This allows for
parallel processing or concurrent execution of subproblems,
which can lead to efficiency gains.
• Conquering Each Subproblem: Once divided, the subproblems
are solved individually. This may involve applying the same
divide and conquer approach recursively until the subproblems
become simple enough to solve directly, or it may involve
applying a different algorithm or technique.
• Combining Solutions: After solving the subproblems, their
solutions are combined to obtain the solution to the original
problem. This combination step should be relatively efficient and
straightforward, as the solutions to the subproblems should be
designed to fit together seamlessly.

Examples of Divide and Conquer Algorithm:


1. Finding the maximum element in the array:
We can use Divide and Conquer Algorithm to find the maximum
element in the array by dividing the array into two equal sized
subarrays, finding the maximum of those two individual halves by
again dividing them into two smaller halves. This is done till we reach
subarrays of size 1. After reaching the elements, we return the
maximum element and combine the subarrays by returning the
maximum in each subarray.
// Function to find the maximum element in a given array
// (requires <limits.h> for INT_MIN)
int findMax(int a[], int lo, int hi)
{
    // If lo becomes greater than hi, the range is empty:
    // return the minimum integer possible
    if (lo > hi)
        return INT_MIN;
    // If the subarray has only one element, return that element
    if (lo == hi)
        return a[lo];
    int mid = (lo + hi) / 2;
    // Get the maximum element from the left half
    int leftMax = findMax(a, lo, mid);
    // Get the maximum element from the right half
    int rightMax = findMax(a, mid + 1, hi);
    // Return the larger of the two
    return (leftMax > rightMax) ? leftMax : rightMax;
}
2. Finding the minimum element in the array:
Similarly, we can use Divide and Conquer Algorithm to find the
minimum element in the array by dividing the array into two equal
sized subarrays, finding the minimum of those two individual halves
by again dividing them into two smaller halves. This is done till we
reach subarrays of size 1. After reaching the elements, we return the
minimum element and combine the subarrays by returning the
minimum in each subarray.
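A sketch of this in C, mirroring the findMax example above (findMin is an assumed name; the empty-range sentinel is INT_MAX from <limits.h>):

```c
#include <limits.h>

// Returns the minimum element of a[lo..hi] by halving the range recursively
int findMin(int a[], int lo, int hi)
{
    // Empty range: return the maximum integer possible
    if (lo > hi)
        return INT_MAX;
    // Base case: a single element is its own minimum
    if (lo == hi)
        return a[lo];
    int mid = (lo + hi) / 2;
    int leftMin = findMin(a, lo, mid);       // minimum of the left half
    int rightMin = findMin(a, mid + 1, hi);  // minimum of the right half
    return (leftMin < rightMin) ? leftMin : rightMin;
}
```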
3. Merge Sort:
We can use Divide and Conquer Algorithm to sort the array in
ascending or descending order by dividing the array into smaller
subarrays, sorting the smaller subarrays and then merging the sorted
arrays to sort the original array.
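A minimal merge sort sketch in C, following the divide-sort-merge description above (assuming C99 for the variable-length temporary array):

```c
#include <string.h>

// Sorts a[lo..hi] in ascending order: divide the range,
// sort each half recursively, then merge the two sorted halves
void mergeSort(int a[], int lo, int hi)
{
    if (lo >= hi)
        return;                     // 0 or 1 element: already sorted
    int mid = (lo + hi) / 2;
    mergeSort(a, lo, mid);          // sort the left half
    mergeSort(a, mid + 1, hi);      // sort the right half

    int tmp[hi - lo + 1];           // merge the sorted halves
    int i = lo, j = mid + 1, k = 0;
    while (i <= mid && j <= hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) tmp[k++] = a[i++];
    while (j <= hi)  tmp[k++] = a[j++];
    memcpy(&a[lo], tmp, k * sizeof(int));  // copy merged result back
}
```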
Complexity Analysis of Divide and Conquer Algorithm:
T(n) = aT(n/b) + f(n), where:
• n = size of the input
• a = number of subproblems in the recursion
• n/b = size of each subproblem (all subproblems are assumed to
have the same size)
• f(n) = cost of the work done outside the recursive calls, which
includes the cost of dividing the problem and the cost of merging
the solutions
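As a worked example (not from the original text), merge sort fits this recurrence with a = 2, b = 2, and f(n) = O(n):

```latex
T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + O(n)
\quad\Longrightarrow\quad
T(n) = O(n \log n),
\qquad \text{since } f(n) = \Theta\!\left(n^{\log_b a}\right) = \Theta(n).
```

This is case 2 of the Master Theorem: the dividing/merging work matches the recursion's balance point, so a logarithmic factor is added.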
Applications of Divide and Conquer Algorithm:
The following are some standard algorithms that follow Divide and
Conquer algorithm:
• Quicksort is a sorting algorithm that picks a pivot element and
rearranges the array elements so that all elements smaller than the
picked pivot element move to the left side of the pivot, and all
greater elements move to the right side. Finally, the algorithm
recursively sorts the subarrays on the left and right of the pivot
element.
• MergeSort is also a sorting algorithm. The algorithm divides the
array into two halves, recursively sorts them, and finally merges
the two sorted halves.
• Closest Pair of Points The problem is to find the closest pair of
points in a set of points in the x-y plane. The problem can be
solved in O(n^2) time by calculating the distances of every pair
of points and comparing the distances to find the minimum. The
Divide and Conquer algorithm solves the problem in O(N log N)
time.
• Strassen’s Algorithm is an efficient algorithm to multiply two
matrices. A simple method to multiply two matrices needs 3
nested loops and is O(n^3). Strassen’s algorithm multiplies two
matrices in O(n^2.8074) time.
• Cooley–Tukey Fast Fourier Transform (FFT) algorithm is the
most common algorithm for FFT. It is a divide and conquer
algorithm which works in O(N log N) time.
• Karatsuba algorithm for fast multiplication multiplies two binary
strings in O(n^1.59) time, where n is the length of the binary
strings.
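A compact C sketch of the quicksort just described (using the last element as pivot, a common choice known as the Lomuto partition scheme):

```c
// Sorts a[lo..hi] in ascending order: partition around the last
// element as pivot, then recursively sort the two sides
void quickSort(int a[], int lo, int hi)
{
    if (lo >= hi)
        return;
    int pivot = a[hi], i = lo;        // pivot: last element of the range
    for (int j = lo; j < hi; j++) {   // move smaller elements to the left
        if (a[j] < pivot) {
            int t = a[i]; a[i] = a[j]; a[j] = t;
            i++;
        }
    }
    int t = a[i]; a[i] = a[hi]; a[hi] = t;  // place the pivot in its spot
    quickSort(a, lo, i - 1);          // sort elements left of the pivot
    quickSort(a, i + 1, hi);          // sort elements right of the pivot
}
```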
Advantages of Divide and Conquer Algorithm:
• Solving difficult problems: The divide and conquer technique is a
conceptual tool for solving difficult problems, e.g., the Tower of
Hanoi puzzle. It requires a way of breaking the problem into
subproblems, solving each of them as an individual case, and then
combining the subproblem solutions into a solution to the original
problem.
• Algorithm efficiency: The divide-and-conquer algorithm often
helps in the discovery of efficient algorithms. It is the key to
algorithms like Quick Sort and Merge Sort, and fast Fourier
transforms.
• Parallelism: Divide and Conquer algorithms are naturally suited
to multi-processor machines with shared-memory systems, because
distinct subproblems can be executed on different processors and
the communication of data between processors does not need to
be planned in advance.
• Memory access: These algorithms naturally make efficient use of
memory caches. Once the subproblems become small enough, they
can be solved within the cache without touching the slower main
memory. Algorithms that use the cache efficiently without
depending on its parameters are called cache-oblivious.
Disadvantages of Divide and Conquer Algorithm:
• Overhead: The process of dividing the problem into
subproblems and then combining the solutions can require
additional time and resources. This overhead can be significant
for problems that are already relatively small or that have a simple
solution.
• Complexity: Dividing a problem into smaller subproblems can
increase the complexity of the overall solution. This is
particularly true when the subproblems are interdependent and
must be solved in a specific order.
• Difficulty of implementation: Some problems are difficult to
divide into smaller subproblems or require a complex algorithm
to do so. In these cases, it can be challenging to implement a
divide and conquer solution.
• Memory limitations: When working with large data sets, the
memory requirements for storing the intermediate results of the
subproblems can become a limiting factor.
Radix Sort
Radix sort algorithm is a non-comparative sorting algorithm in
computer science. It avoids comparison by creating and categorizing
elements based on their radix. For elements with more than one
significant digit, it repeats the bucketing process for each digit while
preserving the previous step's ordering until all digits have been
considered.
Radix sort and bucket sort are closely related: bucket sort goes from
MSD to LSD, while radix sort is capable of both "directions" (LSD or
MSD).
What Is a Radix Sort Algorithm?
• Radix Sort is a linear sorting algorithm.
• Radix Sort has a time complexity of O(nd), where n is the size of
the array and d is the number of digits in the largest number.
• It is not an in-place sorting algorithm because it requires extra
space.
• Radix Sort is a stable sort because it maintains the relative order
of elements with equal values.
• The Radix sort algorithm may be slower than other sorting
algorithms such as merge sort and Quicksort if its underlying
operations are inefficient. These operations include inserting into
and deleting from sublists, and the process of isolating the
desired digits.
• Because it is based on digits or letters, the radix sort is less
flexible than other sorts. If the type of data changes, the Radix
sort must be rewritten.

After defining the radix sort algorithm, you will look at how it works
with an example

Working of Radix Sort Algorithm


• The Radix sort algorithm works by ordering each digit from least
significant to most significant.
• In base 10, radix sort would sort by the digits in the one's place,
then the ten's place, and so on.
• To sort the values in each digit place, Radix sort employs
counting sort as a subroutine.
• This means that for a three-digit number in base 10, counting sort
will be used to sort the ones, tens, and hundreds places, resulting
in a completely sorted list. Here's a rundown of the counting sort
algorithm.
Assume you have a 7-element array. First, you sort the elements by
the value of the unit place. Then you sort the elements based on the
value of the tens place. This process is repeated until the most
significant digit place is reached.
Let's start with the array [132, 543, 783, 63, 7, 49, 898] and sort it
using radix sort, as illustrated below.
• Find the array's largest element, i.e., the maximum. Let A be the
number of digits in the maximum; A is calculated because we
must traverse all of the significant digit places of all elements.
The largest number in this array [132, 543, 783, 63, 7, 49, 898] is 898.
It has three digits, so the loop runs three times (up to the hundreds
place).
• Now, go through each significant digit place one by one. Sort the
digits at each significant place with any stable sorting technique;
here, counting sort is used. First, sort the elements using the
unit-place digits.
• Next, sort the elements by the digits in the tens place.

• Finally, sort the elements by the digits in the hundreds place.

Pseudocode of Radix Sort Algorithm

Radix_Sort(Array, p) // p is the number of passes

for j = 1 to p do
    int count_array[10] = {0}
    for i = 0 to n-1 do
        count_array[key of(Array[i]) in pass j]++ // count_array stores the count of each key
    for k = 1 to 9 do
        count_array[k] = count_array[k] + count_array[k-1] // prefix sums give final positions
    for i = n-1 downto 0 do
        count_array[key of(Array[i]) in pass j]--
        result_array[ count_array[key of(Array[i]) in pass j] ] = Array[i]
        // Place Array[i] into result_array at the position given by
        // count_array; traversing backwards keeps the sort stable
    for i = 0 to n-1 do
        Array[i] = result_array[i]
    // The main array Array[] now contains numbers sorted by the current digit place
end for
end function

Performance of Radix Sort Algorithm


The Time Complexity of Radix Sort Algorithm
• Worst-Case Time Complexity
In radix sort, the worst case occurs when all elements have the same
number of digits except one, which has a significantly larger number
of digits. If the number of digits in the largest element equals n, the
runtime is O(n²).
• Best Case Time Complexity
The best case occurs when all elements have the same number of
digits. The best-case time complexity is O(a(n+b)), where a is the
number of digits and b is the base. If b is O(n), the time complexity
becomes O(a·n).
• Average Case Time Complexity
For the average case, consider the distribution of the number of
digits. There are p passes, and each digit can take up to d different
values. Because radix sort is independent of the input sequence, n can
be kept constant. The running time of radix sort is T(n) = p(n+d).
Taking expectations of both sides and using linearity of expectation,
radix sort has an average-case time complexity of O(p·(n+d)).
The Space Complexity of Radix Sort Algorithm
Radix sort employs counting sort, which uses auxiliary arrays of
sizes n and k, where n is the number of elements in the input array
and k is the largest value among the dth-place digits (ones, tens,
hundreds, and so on) of the input array. Hence, radix sort has a
space complexity of O(n+k).
Stability of Radix Sort Algorithm
The Radix Sort algorithm is a stable integer sorting algorithm built
on a stable sorting subroutine. It sorts a collection of integers
without using comparisons, classifying keys based on individual digits
that share the same significant position and value; elements with
equal keys therefore keep their relative order.
Moving forward in this tutorial, you will look at some of its benefits
and drawbacks.
Advantages of Radix Sort Algorithm
Following are some advantages of the radix sorting algorithm:
• Fast when the keys are short, i.e. when the array element range is
small.
• Used in suffix arrays construction algorithms such as Manber's
and the DC3 algorithm.
• Radix Sort is a stable sort because it maintains the relative order
of elements with equal values.
Disadvantages of Radix Sort Algorithm
Following are some disadvantages of the radix sorting algorithm:
• The Radix Sort algorithm is less flexible than other sorts because
it is based on digits or letters. As a result, for each different type
of data, it must be rewritten.
• Radix sort has a higher constant than other sorting algorithms.
• It takes up more space than Quicksort, which is used for in-place
sorting.
• Radix sort may be slower than other sorting algorithms such as
merge sort and Quicksort if its underlying operations are
inefficient. These operations include inserting into and deleting
from sublists, as well as the process of isolating the desired
digits.
• Because it is based on digits or letters, the radix sort is less
flexible than other sorts. If the type of data changes, the radix
sort must be rewritten.
Now that you have explored the benefits and drawbacks of the Radix
sort algorithm, look at some of its applications.
Applications of Radix Sort Algorithm
These are some applications of radix sort:
• On a typical computer (a sequential random-access machine), the
Radix sort algorithm is used to sort records that are keyed on
multiple fields.
• It is used while creating a suffix array, as in the DC3 algorithm
(Kärkkäinen-Sanders-Burkhardt).
• It is useful for sorting numbers that lie in extensive ranges.
Finally, in this tutorial, you will look at the code implementation of the
radix sort algorithm.
Code Implementation of Radix Sort Algorithm

#include <stdio.h>

int Max_value(int Array[], int n) // Returns the maximum value in Array[]
{
    int i;
    int maximum = Array[0];
    for (i = 1; i < n; i++) {
        if (Array[i] > maximum)
            maximum = Array[i];
    }
    return maximum;
}

void radixSortalgorithm(int Array[], int n) // Main radix sort function
{
    int i, digitPlace = 1;
    int result_array[n]; // resulting array
    int largest = Max_value(Array, n); // Largest number determines the number of digits
    while (largest / digitPlace > 0) {
        int count_array[10] = {0};
        for (i = 0; i < n; i++) // Store the count of "keys" (digits) in count_array[]
            count_array[(Array[i] / digitPlace) % 10]++;
        for (i = 1; i < 10; i++) // Prefix sums give each digit's final positions
            count_array[i] += count_array[i - 1];
        for (i = n - 1; i >= 0; i--) // Build the resulting array (backwards keeps it stable)
        {
            result_array[count_array[(Array[i] / digitPlace) % 10] - 1] = Array[i];
            count_array[(Array[i] / digitPlace) % 10]--;
        }
        for (i = 0; i < n; i++) // Copy back: sorted by the current digit place
            Array[i] = result_array[i];
        digitPlace *= 10; // Move to the next digit place
    }
}

void displayArray(int Array[], int n) // Function to print an array
{
    int i;
    for (i = 0; i < n; i++)
        printf("%d ", Array[i]);
    printf("\n");
}

int main()
{
    int array1[] = {20, 30, 40, 90, 60, 100, 50, 70};
    int n = sizeof(array1) / sizeof(array1[0]);
    printf("Unsorted Array is : ");
    displayArray(array1, n);
    radixSortalgorithm(array1, n);
    printf("Sorted Array is: ");
    displayArray(array1, n);
    return 0;
}

Output
Unsorted Array is : 20 30 40 90 60 100 50 70
Sorted Array is: 20 30 40 50 60 70 90 100
References
1. Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2009). Introduction
to Algorithms (3rd ed.). MIT Press.
2. Aho, A. V., Hopcroft, J. E., & Ullman, J. D. (1974). The Design and Analysis
of Computer Algorithms. Addison-Wesley.
3. Sedgewick, R., & Wayne, K. (2011). Algorithms (4th ed.). Addison-Wesley.
4. Kleinberg, J., & Tardos, É. (2005). Algorithm Design. Pearson Education.
5. Knuth, D. E. (1998). The Art of Computer Programming, Volume 3: Sorting
and Searching (2nd ed.). Addison-Wesley.
6. MIT OpenCourseWare. Radix Sort Algorithms. Massachusetts Institute of
Technology. Available at: https://ocw.mit.edu
