KCS553 - DAA Lab
“Design and Analysis of Algorithms” is a field of computer science that focuses on the study of
algorithms, which are step-by-step procedures or sets of instructions for solving specific
problems or tasks in a finite number of steps.
Algorithm Design involves the process of creating efficient algorithms to solve particular problems.
It's about coming up with a well-structured and logically sound sequence of steps to perform a
task. Good algorithm design aims to optimise factors like time complexity (how quickly an
algorithm runs) and space complexity (how much memory an algorithm uses).
There are various strategies for algorithm design, including divide and conquer, dynamic
programming, greedy algorithms, and more.
The importance of design and analysis of algorithms lies in optimising computational processes.
Efficient algorithms can significantly reduce the time and resources required to solve
problems, which is crucial in computer science, where efficiency is often a critical factor.
The field of algorithm design and analysis is fundamental in computer science and has broad
applications across various domains, such as:
➢ Problem Solving
➢ Artificial Intelligence
➢ Data Science
➢ Cryptography
➢ Network routing and many others.
PROGRAM - 1
OBJECTIVE : Write a Program for recursive linear search and recursive binary search.
DESCRIPTION :
1. LINEAR SEARCH
A linear search program is a simple algorithmic program that searches for a specific element in
an array or list by sequentially examining each element one by one. It starts from the
beginning and compares each element with the target value until a match is found or the
entire array is searched. Linear search is straightforward but less efficient for large
datasets, as it may require scanning through all elements in the worst case. It is easy to
implement and suitable for unsorted or small datasets where efficiency is not a critical
concern. Linear search is a fundamental algorithm used in various applications,
including searching and data retrieval tasks.
2. BINARY SEARCH
A binary search program is an efficient algorithm used to find a specific target element within a
sorted array or list. It works by repeatedly dividing the search range in half, eliminating
half of the remaining elements in each iteration. This process continues until the target
element is found or determined to be absent. Binary search is significantly faster than
linear search for large datasets because it halves the search space with each
comparison. It has a time complexity of O(log n), making it suitable for searching
in large datasets or databases.
TIME COMPLEXITY :
1. LINEAR SEARCH :
Linear search has a best-case time complexity of O(1), when the target is the first element, and an
average- and worst-case time complexity of O(n), since in the worst case every one of the n
elements must be examined.
2. BINARY SEARCH :
Binary search has a best-case time complexity of O(1), when the middle element is the target, and an
average- and worst-case time complexity of O(log n), since each comparison halves the remaining
search range.
In summary, the key difference between linear search and binary search is how their running time
grows with the input size (n). Linear search has a linear time complexity, O(n), making it suitable
for small datasets or unsorted data. Binary search, on the other hand, has a logarithmic time
complexity, O(log n), which makes it significantly faster for large sorted datasets: for
n = 1,000,000 elements, linear search may need up to 1,000,000 comparisons, while binary search
needs at most about 20.
ALGORITHM :
1. RECURSIVE LINEAR SEARCH :
RecursiveLinearSearch(arr, target, index, size):
    if index >= size:
        return -1 // Reached the end of the array: element not found
    if arr[index] == target:
        return index // Element found at this index
    return RecursiveLinearSearch(arr, target, index + 1, size) // Recursively search the next index
2. RECURSIVE BINARY SEARCH :
RecursiveBinarySearch(arr, target, low, high):
    if low > high:
        return -1 // Search range is empty: element not found
    mid = (low + high) / 2
    if arr[mid] == target:
        return mid // Element found at mid
    else if arr[mid] < target:
        return RecursiveBinarySearch(arr, target, mid + 1, high) // Search the right half
    else:
        return RecursiveBinarySearch(arr, target, low, mid - 1) // Search the left half
SOURCE CODE :
1. RECURSIVE LINEAR SEARCH :
LAB/LinearSearch.c
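The original LAB/LinearSearch.c listing is not reproduced in this document, so what follows is a
minimal sketch of what it might contain; the function name recursiveLinearSearch and the sample
array are illustrative assumptions rather than the original source.

/* Illustrative sketch of recursive linear search; not the original LAB/LinearSearch.c. */
#include <stdio.h>

/* Returns the index of target in arr[index..size-1], or -1 if it is absent. */
int recursiveLinearSearch(int arr[], int target, int index, int size) {
    if (index >= size)
        return -1;                /* Reached the end: element not found */
    if (arr[index] == target)
        return index;             /* Element found at this index */
    return recursiveLinearSearch(arr, target, index + 1, size); /* Try the next index */
}

int main(void) {
    int arr[] = {12, 7, 3, 25, 9};    /* Sample data for illustration */
    int size = sizeof(arr) / sizeof(arr[0]);
    int target = 25;
    int pos = recursiveLinearSearch(arr, target, 0, size);
    if (pos != -1)
        printf("Element %d found at index %d\n", target, pos);
    else
        printf("Element %d not found\n", target);
    return 0;
}

Compiled and run, this sketch prints "Element 25 found at index 3".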
OUTPUT :
2. RECURSIVE BINARY SEARCH :
LAB/BinarySearch.c
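As above, the original LAB/BinarySearch.c listing is not reproduced here; the sketch below is an
illustrative assumption of its structure. The midpoint is computed as low + (high - low) / 2, a
common guard against integer overflow of (low + high) on very large arrays.

/* Illustrative sketch of recursive binary search; not the original LAB/BinarySearch.c. */
#include <stdio.h>

/* Returns the index of target in the sorted slice arr[low..high], or -1 if absent. */
int recursiveBinarySearch(int arr[], int target, int low, int high) {
    if (low > high)
        return -1;                      /* Search range is empty: not found */
    int mid = low + (high - low) / 2;   /* Overflow-safe midpoint */
    if (arr[mid] == target)
        return mid;                     /* Element found at mid */
    if (arr[mid] < target)
        return recursiveBinarySearch(arr, target, mid + 1, high); /* Right half */
    return recursiveBinarySearch(arr, target, low, mid - 1);      /* Left half */
}

int main(void) {
    int arr[] = {3, 7, 9, 12, 25};      /* Binary search requires sorted input */
    int size = sizeof(arr) / sizeof(arr[0]);
    int target = 9;
    int pos = recursiveBinarySearch(arr, target, 0, size - 1);
    if (pos != -1)
        printf("Element %d found at index %d\n", target, pos);
    else
        printf("Element %d not found\n", target);
    return 0;
}

Compiled and run, this sketch prints "Element 9 found at index 2".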
OUTPUT :
PROGRAM - 2
OBJECTIVE : Write a Program for Insertion Sort.
DESCRIPTION :
INSERTION SORT :
The insertion sort program in C is a simple sorting algorithm that organises an array of elements
in ascending order. It iterates through the array, considering one element at a time and
placing it in its correct position among the previously sorted elements. This process
continues until the entire array is sorted. Users input the number of elements and the
array values, and the program efficiently sorts the array using the insertion sort
technique. The sorted array is then displayed, making it a practical tool for arranging
data in a straightforward manner.
TIME COMPLEXITY :
INSERTION SORT :
Insertion sort is a straightforward sorting algorithm that works well for small datasets or partially
sorted data but is less efficient for large datasets. Its time complexity can be analysed as
follows:
1. Best-case Time Complexity: When the input array is already sorted in ascending order,
insertion sort has a best-case time complexity of O(n). This is because it only needs to
make one pass through the array to confirm that each element is in its correct position.
2. Average-case Time Complexity: In the average case, insertion sort has a time complexity of
O(n^2), where n is the number of elements in the array. This is because, on average, it
needs to make roughly n/2 comparisons and swaps for each element.
3. Worst-case Time Complexity: In the worst-case scenario, when the input array is sorted in
descending order, insertion sort has a time complexity of O(n^2). This occurs because,
for each element in the array, it needs to compare and move it to its correct position,
leading to a quadratic number of operations.
Insertion sort is considered an "in-place" sorting algorithm because it sorts the elements within
the original array without requiring additional memory allocation. However, its time
complexity makes it less efficient than more advanced sorting algorithms like quick sort
or merge sort for larger datasets.
While insertion sort is not the most efficient sorting algorithm for large datasets, it can still be
useful for small datasets or for situations where the data is already partially sorted, as its
best-case performance is quite good.
ALGORITHM :
1. Start with the second element (index 1) and consider it as the current element.
2. Compare the current element with the one before it (previous element).
3. If the current element is smaller than the previous element, swap them.
4. Repeat steps 2 and 3 until the current element is in its correct sorted position among the previous
elements.
5. Move to the next unsorted element (increment index) and repeat steps 2-4.
6. Continue this process until you have processed all elements in the array.
7. The array is now sorted in ascending order.
Pseudocode:
for i from 1 to n-1:
    current_element = arr[i]
    j = i - 1
    while j >= 0 and arr[j] > current_element:
        arr[j + 1] = arr[j]
        j = j - 1
    arr[j + 1] = current_element
SOURCE CODE :
LAB/InsertionSort.c
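The original LAB/InsertionSort.c listing is not reproduced here; the following minimal C sketch
follows the pseudocode above, with the user-input step described earlier replaced by a fixed
sample array for brevity (an assumption made for illustration).

/* Illustrative sketch of insertion sort; not the original LAB/InsertionSort.c. */
#include <stdio.h>

/* Sorts arr[0..n-1] in ascending order, in place. */
void insertionSort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int current = arr[i];          /* Element to insert into the sorted prefix */
        int j = i - 1;
        while (j >= 0 && arr[j] > current) {
            arr[j + 1] = arr[j];       /* Shift larger elements one slot right */
            j = j - 1;
        }
        arr[j + 1] = current;          /* Drop the element into its sorted position */
    }
}

int main(void) {
    int arr[] = {9, 5, 1, 4, 3};
    int n = sizeof(arr) / sizeof(arr[0]);
    insertionSort(arr, n);
    printf("Sorted array: ");
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");
    return 0;
}

Compiled and run, this sketch prints "Sorted array: 1 3 4 5 9".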
OUTPUT :
PROGRAM - 3
OBJECTIVE : Write a Program for Quick Sort.
DESCRIPTION :
QUICK SORT :
The quick sort program in C is an efficient divide-and-conquer sorting algorithm that organises an
array of elements in ascending order. It selects a pivot element, partitions the array so that
elements smaller than the pivot come before it and larger elements come after it, and then
recursively sorts the two resulting subarrays. This process continues until every subarray
contains at most one element, at which point the entire array is sorted.
TIME COMPLEXITY :
QUICK SORT :
Quick Sort is a highly efficient and widely used sorting algorithm, known for its average-case
time complexity of O(n log n). However, its time complexity can vary depending on
various factors, including the choice of the pivot element and the input data.
1. Average-case Time Complexity: Quick Sort's average-case time complexity is
O(n log n), which makes it one of the fastest sorting algorithms for most practical
scenarios. The algorithm efficiently divides the array into smaller subarrays and sorts
them, resulting in a balanced recursion tree.
2. Worst-case Time Complexity: In the worst-case scenario, Quick Sort can have a time
complexity of O(n^2). This occurs when the pivot element is consistently chosen
poorly, causing the algorithm to create unbalanced partitions. However, various
strategies, such as choosing a random pivot or using the "median of three" method, can
mitigate the likelihood of encountering the worst-case scenario. With these
optimisations, worst-case scenarios are rare in practice.
3. Best-case Time Complexity: The best-case time complexity of Quick Sort is also O(n log
n). This occurs when the pivot is consistently chosen such that it roughly divides the
array into two equal-sized subarrays. In such cases, Quick Sort exhibits its optimal
performance.
ALGORITHM :
1. Choose a pivot element from the array (commonly the first, last, or a random element).
2. Partition the array so that all elements smaller than the pivot come before it and all larger
elements come after it; the pivot is now in its final sorted position.
3. Recursively apply steps 1 and 2 to the subarray to the left of the pivot and to the subarray to
the right of the pivot.
4. Stop when a subarray contains fewer than two elements; the array is now sorted in ascending
order.
SOURCE CODE :
LAB/QuickSort.c
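The original LAB/QuickSort.c listing is not reproduced here; the sketch below assumes the common
Lomuto partition scheme with the last element as pivot, an illustrative choice rather than a
detail confirmed by this document. A random or median-of-three pivot, as discussed above, could be
swapped into partition() to make the O(n^2) worst case less likely.

/* Illustrative sketch of quick sort (Lomuto partition); not the original LAB/QuickSort.c. */
#include <stdio.h>

static void swap(int *a, int *b) {
    int t = *a;
    *a = *b;
    *b = t;
}

/* Partitions arr[low..high] around arr[high] (the pivot) and returns the pivot's final index. */
static int partition(int arr[], int low, int high) {
    int pivot = arr[high];             /* Last element as pivot: an assumed choice */
    int i = low - 1;                   /* Boundary of the "smaller than pivot" region */
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {
            i++;
            swap(&arr[i], &arr[j]);
        }
    }
    swap(&arr[i + 1], &arr[high]);     /* Place the pivot in its final position */
    return i + 1;
}

void quickSort(int arr[], int low, int high) {
    if (low < high) {
        int p = partition(arr, low, high);
        quickSort(arr, low, p - 1);    /* Sort elements left of the pivot */
        quickSort(arr, p + 1, high);   /* Sort elements right of the pivot */
    }
}

int main(void) {
    int arr[] = {10, 80, 30, 90, 40, 50, 70};
    int n = sizeof(arr) / sizeof(arr[0]);
    quickSort(arr, 0, n - 1);
    printf("Sorted array: ");
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");
    return 0;
}

Compiled and run, this sketch prints "Sorted array: 10 30 40 50 70 80 90".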
OUTPUT :