DS Unit-1

The document provides an overview of data structures, including their classifications into primitive and non-primitive types, and further into linear and non-linear structures. It discusses specific examples such as stacks, queues, trees, and graphs, along with their operations and applications. Additionally, it covers abstract data types, time and space complexity, and the importance of linear data structures in computer science.

UNIT-1

……………………………………………………………………………………………………………….
Introduction:
A data structure is an arrangement of data either in the computer's memory or on disk storage.
Some common examples of data structures are arrays, linked lists, queues, stacks, binary trees, and graphs.
Data structures are widely applied in areas like:
 Compiler design
 Operating system
 Statistical analysis package
 DBMS
 Numerical analysis

Classification of Data Structures:

 Primitive data structures are the fundamental data types which are supported by a programming
language. Some basic data types are integer, real, and boolean. The terms ‘data type’, ‘basic data type’,
and ‘primitive data type’ are often used interchangeably.
 Non-primitive data structures are those data structures which are created using primitive data structures.
Examples of such data structures include linked lists, stacks, trees, and graphs.
 Non-primitive data structures can further be classified into two categories: linear and non-linear data
structures.
 If the elements of a data structure are stored in a linear or sequential order, then it is a linear data
structure. Examples are arrays, linked lists, stacks, and queues.
 If the elements of a data structure are not stored in a sequential order, then it is a non-linear data
structure. Examples are trees and graphs.

Examples of Linear data structure: arrays, linked lists, stacks, and queues.
ARRAYS:
An array is a collection of elements of the same data type stored in contiguous memory locations, so that any element can be accessed directly using its index.

LINKED LISTS:
A linked list is a collection of nodes in which each node stores a data element and a pointer to the next node; the nodes need not occupy contiguous memory locations.

STACK:
 A stack is a linear data structure in which insertion and deletion of elements are done at only one end, which is known as the top of the stack.
 A stack is called a last-in, first-out (LIFO) structure because the last element which is added to the stack is the first element which is deleted from the stack.
 Stacks can be implemented using arrays or linked lists.
 Every stack has a variable top associated with it. Top is used to store the address of the topmost element of the stack. It is this position from where an element will be added or deleted.
 There is another variable MAX, which is used to store the maximum number of elements that the stack can store.
 If top = NULL, then the stack is empty, and if top = MAX–1, then the stack is full.
 A stack supports three basic operations: push, pop, and peep. The push operation adds an element to the top of the stack. The pop operation removes the element from the top of the stack. And the peep operation returns the value of the topmost element of the stack (without deleting it).

Queues:
 A queue is a linear data structure in which insertion can be done at the rear end and deletion of elements can be done at the front end.
 A queue is a first-in, first-out (FIFO) data structure in which the element that is inserted first is the first one to be taken out.
 Like stacks, queues can be implemented by using either arrays or linked lists.

Insert element into the Queue:

Delete element from Queue:

 A queue is full when rear = MAX – 1. An underflow condition occurs when we try to delete an element from a queue that is already empty. If front = NULL and rear = NULL, then there is no element in the queue.
Trees:
 A tree is a non-linear data structure which consists of a collection of nodes arranged in a
hierarchical order.
 One of the nodes is designated as the root node, and the remaining nodes can be partitioned into
disjoint sets such that each set is a sub-tree of the root
 The simplest form of a tree is a binary tree. A binary tree
consists of a root node and left and right sub-trees, where both
sub-trees are also binary trees.
 Each node contains a data element, a left pointer which points
to the left sub-tree, and a right pointer which points to the right
sub-tree.
 The root element is the topmost node which is pointed by a
‘root’ pointer. If root = NULL then the tree is empty.

 Here R is the root node and T1 and T2 are the left and right subtrees of R. If T1 is non-empty,
then T1 is said to be the left successor of R. Likewise, if T2 is non-empty, then it is called the
right successor of R.
Advantage: Provides quick search, insert, and delete operations
Disadvantage: Complicated deletion algorithm
Graphs:
 A graph is a non-linear data structure which is a collection of vertices (also called nodes) and
edges that connect these vertices.
 A node in the graph may represent a city and the edges connecting
the nodes can represent roads.
 A graph can also be used to represent a computer network where
the nodes are workstations and the edges are the network
connections.
 Graphs do not have any root node. Rather, every node in the graph can be connected with any other node in the graph.
Advantage: Best models real-world situations
Disadvantage: Some algorithms are slow and very complex

Difference between Linear and Non-linear Data Structures:

1. In a linear data structure, data elements are arranged in a linear order, where each element is attached to its previous and next adjacent elements. In a non-linear data structure, data elements are attached in a hierarchical manner.
2. In a linear data structure, a single level is involved. In a non-linear data structure, multiple levels are involved.
3. The implementation of a linear data structure is easy in comparison to a non-linear data structure. The implementation of a non-linear data structure is complex in comparison to a linear data structure.
4. In a linear data structure, data elements can be traversed in a single run. In a non-linear data structure, data elements cannot be traversed in a single run.
5. In a linear data structure, memory is not utilized in an efficient way. In a non-linear data structure, memory is utilized in an efficient way.
6. Examples of linear data structures are arrays, stacks, queues, linked lists, etc. Examples of non-linear data structures are trees and graphs.
7. Applications of linear data structures are mainly in application software development. Applications of non-linear data structures are in Artificial Intelligence and image processing.
8. Linear data structures are useful for simple data storage and manipulation. Non-linear data structures are useful for representing complex relationships and data hierarchies, such as in social networks, file systems, or computer networks.
9. For linear data structures, performance is usually good for simple operations like adding or removing at the ends, but slower for operations like searching or removing elements in the middle. For non-linear data structures, performance can vary depending on the structure and the operation, but can be optimized for specific operations.

OPERATIONS ON DATA STRUCTURES:

This section discusses the different operations that can be performed on the various data
structures previously mentioned.
 Traversing It means to access each data item exactly once so that it can be processed. For example,
to print the names of all the students in a class.
 Searching It is used to find the location of one or more data items that satisfy the given constraint.
Such a data item may or may not be present in the given collection of data items. For example, to
find the names of all the students who secured 100 marks in mathematics.
 Inserting It is used to add new data items to the given list of data items. For example, to add the
details of a new student who has recently joined the course.
 Deleting It means to remove (delete) a particular data item from the given collection of data items.
For example, to delete the name of a student who has left the course.
 Sorting Data items can be arranged in some order like ascending order or descending order
depending on the type of application. For example, arranging the names of students in a class in
an alphabetical order, or calculating the top three winners by arranging the participants’ scores in
descending order and then extracting the top three.
 Merging It means combining two sorted lists of data items to form a single sorted list of data items.

Importance of Linear Data Structures:


Linear data structures play a crucial role in computer science and software development. Their
importance can be understood from multiple perspectives:
1. Efficient Data Organization
Linear data structures arrange elements in a sequential manner, making them easy to store, access, and
manipulate.
Examples: Arrays, Linked Lists, Stacks, and Queues.
2. Simplified Data Access
Elements can be accessed using indexing (e.g., arrays) or pointers (e.g., linked lists).
Sequential access ensures predictable performance in operations like searching and traversal.
3. Memory Management
Some linear structures (e.g., linked lists) provide dynamic memory allocation, reducing memory
wastage.
Arrays use contiguous memory, leading to efficient caching and faster access.
4. Essential for Algorithm Implementation
Many algorithms rely on linear structures for data storage and processing.
Searching algorithms (e.g., Linear Search, Binary Search) and sorting algorithms (e.g., Bubble Sort, Merge Sort)
operate on linear structures.
5. Supports Core Computing Operations
Stacks: Used in function calls, recursion, expression evaluation, and undo/redo functionality.
Queues: Used in scheduling tasks, buffering data streams, and handling requests in operating systems.
6. Foundation for Advanced Data Structures
More complex structures like trees, graphs, and hash tables build upon linear data structures.
Example: Linked lists form the basis for implementing adjacency lists in graph structures
7. Real-World Applications
• Arrays: Used in databases, image processing, and data storage.
• Linked Lists: Used in navigation systems, dynamic memory management, and undo operations.
• Stacks: Used in expression evaluation, parsing, and backtracking algorithms.
• Queues: Used in process scheduling, message handling, and real-time data streaming.

ABSTRACT DATA TYPE:

An abstract data type (ADT) is a data type that is defined by the operations that can be performed on it and by their behaviour, rather than by how those operations are implemented. Examples include the List ADT, Stack ADT, and Queue ADT.

FEATURES OF ADT:
• Abstraction: The user does not need to know the implementation of the data
structure only essentials are provided.
• Better Conceptualization: ADT gives us a better conceptualization of the real
world.
• Robust: The program is robust and has the ability to catch errors.
• Encapsulation: ADTs hide the internal details of the data and provide a public
interface for users to interact with the data. This allows for easier maintenance
and modification of the data structure.
• Data Abstraction: ADTs provide a level of abstraction from the
implementation details of the data. Users only need to know the operations that
can be performed on the data, not how those operations are implemented.
• Data Structure Independence: ADTs can be implemented using different data
structures, such as arrays or linked lists, without affecting the functionality of
the ADT.
• Information Hiding: ADTs can protect the integrity of the data by allowing
access only to authorized users and operations. This helps prevent errors and
misuse of the data.
• Modularity: ADTs can be combined with other ADTs to form larger, more
complex data structures. This allows for greater flexibility and modularity in
programming.

Advantages of ADT
• Encapsulation: ADTs provide a way to encapsulate data and operations into a
single unit, making it easier to manage and modify the data structure.
• Abstraction: ADTs allow users to work with data structures without having to
know the implementation details, which can simplify programming and reduce
errors.
• Data Structure Independence: ADTs can be implemented using different data
structures, which can make it easier to adapt to changing needs and requirements.
• Information Hiding: ADTs can protect the integrity of data by controlling
access and preventing unauthorized modifications.
• Modularity: ADTs can be combined with other ADTs to form more complex
data structures, which can increase flexibility and modularity in programming.
Disadvantages of ADT
• Overhead: Implementing ADTs can add overhead in terms of memory and
processing, which can affect performance.
• Complexity: ADTs can be complex to implement, especially for large and
complex data structures.
• Learning Curve: Using ADTs requires knowledge of their implementation and
usage, which can take time and effort to learn.
• Limited Flexibility: Some ADTs may be limited in their functionality or may
not be suitable for all types of data structures.
• Cost: Implementing ADTs may require additional resources and investment,
which can increase the cost of development
OVERVIEW OF TIME AND SPACE COMPLEXITY:

Analyzing an algorithm means determining the amount of resources (such as time and memory) needed to execute it. Algorithms are generally designed to work with an arbitrary number of inputs, so the efficiency or complexity of an algorithm is stated in terms of time and space complexity.

The time complexity of an algorithm is basically the running time of a program as a function of the input size. Similarly, the space complexity of an algorithm is the amount of computer memory that is required during the program execution as a function of the input size.

In other words, the number of machine instructions which a program executes is called its time complexity. This number depends primarily on the size of the program's input and the algorithm used.

Time Complexity:
The amount of time required for an algorithm to complete its execution is its time complexity.
An algorithm is said to be efficient if it takes the minimum (reasonable) amount of time to
complete its execution.
The number of steps assigned to any program statement depends on the kind of statement. For example, comments count as 0 steps.

1. The first method is to introduce a variable, count, with initial value 0, and to insert statements into the program that increment count by the appropriate amount. This is done so that each time a statement in the original program executes, count is incremented by the step count of that statement.
Algorithm:
Algorithm sum(a, n)
{
    s := 0.0;
    count := count + 1;     // for the assignment to s
    for i := 1 to n do
    {
        count := count + 1; // for the for loop test
        s := s + a[i];
        count := count + 1; // for the assignment to s
    }
    count := count + 1;     // for the last (failing) loop test
    count := count + 1;     // for the return
    return s;
}

If count is zero to start with, then it will be 2n+3 on termination. So each invocation of sum executes a total of 2n+3 steps.
2. The second method to determine the step count of an algorithm is to build a table in which we list the total number of steps contributed by each statement.
First determine the number of steps per execution (s/e) of the statement and the total number of times (i.e., the frequency) each statement is executed.
By combining these two quantities, the total contribution of all statements, i.e. the step count for the entire algorithm, is obtained.
Statement                 s/e   Frequency   Total
1. Algorithm sum(a,n)      0       –          0
2. {                       0       –          0
3.   s := 0.0;             1       1          1
4.   for i := 1 to n do    1      n+1        n+1
5.     s := s + a[i];      1       n          n
6.   return s;             1       1          1
7. }                       0       –          0
Total                                       2n+3

Space Complexity:
The amount of memory space required by an algorithm is known as its space complexity. An algorithm is said to be efficient if it occupies less space and requires the minimum amount of time to complete its execution. The space requirement has a fixed part and a variable part.

Fixed part:
It is independent of the characteristics of the problem instance. It includes the space needed for storing instructions, constants, simple variables, and fixed-size structured variables (like arrays and structures).

Variable part:
It varies from instance to instance. It includes the space needed for the recursion stack, and for structured variables that are allocated space dynamically during the runtime of a program.

The space requirement S(P) of any algorithm P may therefore be written as
S(P) = c + Sp(instance characteristics), where 'c' is a constant representing the fixed part.

What to Analyze in an Algorithm:

An algorithm can require different times to solve different problems of the same size.
1. Worst case: The maximum amount of time that an algorithm requires to solve a problem of size 'n'. Normally we take this upper bound as the complexity, so we try to find the worst-case behaviour.
2. Best case: The minimum amount of time that an algorithm requires to solve a problem of size 'n'. Normally it is not very useful.
3. Average case: The average amount of time that an algorithm requires to solve a problem of size 'n'. Sometimes it is difficult to find, because we have to check all possible data organizations.
SORTINGS:

 Definition: Sorting is a technique to rearrange a list of records (elements) in either ascending or descending order. Sorting is performed according to some key value of each record.
Categories of Sorting:
The sorting can be divided into two categories. These are:
 Internal Sorting
 External Sorting

 Internal Sorting: When all the data that is to be sorted can be accommodated at one time in the main memory (usually RAM). Internal sorting has five different classifications: insertion, selection, exchanging, merging, and distribution sort.
 External Sorting: When all the data that is to be sorted cannot be accommodated in the memory (usually RAM) at the same time and some of it has to be kept in auxiliary memory such as a hard disk, floppy disk, magnetic tapes, etc.
Ex: Natural, Balanced, and Polyphase.

BUBBLE SORT:

 Bubble Sort is also called as Exchange Sort


 In Bubble Sort, each element is compared with its adjacent element.
a) If the first element is larger than the second element, then the positions of the elements are interchanged.
b) Otherwise, the positions of the elements are not changed.
c) The same procedure is repeated until no more elements are left for comparison.
 After the 1st pass, the largest element is placed at the (N-1)th location. Given a list of n elements, bubble sort requires up to n – 1 passes to sort the data.


Bubble sort works by repeatedly swapping adjacent elements that are not in the intended order. It is called bubble sort because the movement of array elements is just like the movement of air bubbles in water. Bubbles in water rise up to the surface; similarly, the array elements in bubble sort move to the end in each iteration.

Although it is simple to use, it is primarily used as an educational tool because the performance
of bubble sort is poor in the real world. It is not suitable for large data sets. The average and
worst-case complexity of Bubble sort is O(n2), where n is a number of items.

Bubble sort is mainly used where -

o complexity does not matter


o simple and shortcode is preferred

Algorithm
In the algorithm given below, suppose arr is an array of n elements. The
assumed swap function in the algorithm will swap the values of given array elements.

1. begin BubbleSort(arr)
2.    for i = 0 to n-2
3.       for j = 0 to n-i-2
4.          if arr[j] > arr[j+1]
5.             swap(arr[j], arr[j+1])
6.          end if
7.       end for
8.    end for
9.    return arr
10. end BubbleSort

logic on this bubble sort :

for (int i = 0; i < n - 1; i++) {


for (int j = 0; j < n - i - 1; j++) {
if (arr[j] > arr[j + 1]) {
int temp = arr[j];
arr[j] = arr[j + 1];
arr[j + 1] = temp;
}
}
}

Working of Bubble sort Algorithm


Now, let's see the working of Bubble sort Algorithm.

To understand the working of the bubble sort algorithm, let's take an unsorted array. We take a short array, because the complexity of bubble sort is O(n2) and a longer example would require many steps.

Let the elements of the array be: 13, 32, 26, 35, 10.

The total number of elements is 5, so n-1, i.e. 4, passes are required to sort the array.

First Pass
Sorting will start from the initial two elements. Let compare them to check which is greater.

Here, 32 is greater than 13 (32 > 13), so these two are already sorted. Now, compare 32 with 26.

Here, 26 is smaller than 32. So, swapping is required. After swapping, the new array will look like -

Now, compare 32 and 35.

Here, 35 is greater than 32. So, there is no swapping required as they are already sorted.

Now, the comparison will be in between 35 and 10.

Here, 10 is smaller than 35, so they are not sorted and swapping is required. Now, we reach the end of the array. After the first pass, the array will be -
Now, move to the second iteration.

Second Pass
The same process will be followed for second iteration.

Here, 10 is smaller than 32. So, swapping is required. After swapping, the array will be -

Now, move to the third iteration.

Third Pass
The same process will be followed for third iteration.

Here, 10 is smaller than 26. So, swapping is required. After swapping, the array will be -

Now, move to the fourth iteration.

Fourth pass
Similarly, after the fourth iteration, the array will be -
Hence, there is no swapping required, so the array is completely sorted.

Bubble sort complexity


Now, let's see the time complexity of bubble sort in the best case, average case, and worst
case. We will also see the space complexity of bubble sort.

Best Case O(n)

Average Case O(n2)

Worst Case O(n2)

o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is already
sorted. The best-case time complexity of bubble sort is O(n).
o Average Case Complexity - It occurs when the array elements are in jumbled order that
is not properly ascending and not properly descending. The average case time complexity
of bubble sort is O(n2).
o Worst Case Complexity - It occurs when the array elements are required to be sorted in
reverse order. That means suppose you have to sort the array elements in ascending order,
but its elements are in descending order. The worst-case time complexity of bubble sort
is O(n2).

2. Space Complexity
Space Complexity O(1)

Stable YES

o The space complexity of bubble sort is O(1), because only a single extra variable is required for swapping.
o The optimized bubble sort uses one more extra variable (a swapped flag), but this is still constant space, i.e. O(1).

Example:2
 We take an unsorted array for our example.

 Bubble sort starts with very first two elements, comparing them to check which one is
greater.

 In this case, value 33 is greater than 14, so it is already in sorted locations. Next, we
compare 33 with 27. We find that 27 is smaller than 33 and these two values must
be swapped.

 Next we compare 33 and 35. We find that both are in already sorted positions.

 Then we move to the next two values, 35 and 10. We know that 10 is smaller than 35.

 We swap these values. We find that we have reached the end of the array. After one
iteration, the array should look like this −

 To be precise, we are now showing how the array should look after each iteration. After the second iteration, it should look like this

 Notice that after each iteration, at least one value moves to the end.

 And when there's no swap required, bubble sort learns that the array is completely sorted.

Advantages of Bubble Sort:


 Bubble sort is easy to understand and implement.
 It does not require any additional memory space.
 It is a stable sorting algorithm, meaning that elements with the same key
value maintain their relative order in the sorted output.
Disadvantages of Bubble Sort:
 Bubble sort has a time complexity of O(n2) which makes it very slow for
large data sets.
 Bubble sort has almost no or limited real world applications. It is mostly
used in academics to teach different ways of sorting.

Example program : Implementation of bubble sort using c


Code:

#include <stdio.h>

void bubble_sort(int arr[], int n) {


for (int i = 0; i < n - 1; i++) {
for (int j = 0; j < n - i - 1; j++) {
if (arr[j] > arr[j + 1]) {
int temp = arr[j];
arr[j] = arr[j + 1];
arr[j + 1] = temp;
}
}
}
}

int main() {
int arr[] = {5, 3, 8, 4, 2};
int n = sizeof(arr) / sizeof(arr[0]);

bubble_sort(arr, n);

for (int i = 0; i < n; i++) {


printf("%d ", arr[i]);
}

return 0;
}

Output:

2 3 4 5 8

INSERTION SORT:

Insertion Sort is a simple sorting algorithm that builds the final sorted array one item at a time. It
works much like the way you might sort playing cards in your hands—by inserting each card into
its correct position relative to the cards already sorted.

 In Insertion sort the list can be divided into two parts, one is sorted list and other is
unsorted list. In each pass the first element of unsorted list is transfers to sorted list by
inserting it in appropriate position or proper place.
 The similarity can be understood from the style in which we arrange a deck of cards. This sort works on the principle of inserting an element at a particular position, hence the name Insertion Sort.
Following are the steps involved in insertion sort:
1. We start by taking the second element of the given array, i.e. element at index
1, the key. The key element here is the new card that we need to add to our existing
sorted set of cards
2. We compare the key element with the element(s) before it, in this case, element at index
0:
o If the key element is less than the first element, we insert the key element
before the first element.
o If the key element is greater than the first element, then we insert it after the first
element.
3. Then, we make the third element of the array as key and will compare it with elements
to it's left and insert it at the proper position.
4. And we go on repeating this, until the array is sorted.
Example 1:

example 2:
How It Works:

1. The algorithm starts with the second element (since a single-element array is trivially sorted).
2. It compares the current element with the elements in the sorted part of the array (to the left).
3. It shifts the elements of the sorted sublist to the right to make space for the current element.
4. It inserts the current element into its correct position.
5. The process repeats until the entire array is sorted.

Algorithm Steps:

1. Start from the second element (index 1).


The first element is considered as a sorted sublist, and we will insert the second element
into the sorted sublist.
2. Compare the current element with the elements of the sorted sublist (left side).
For each element in the sorted part, if the current element is smaller than the element being
compared, shift the larger element one position to the right.
3. Insert the current element into the correct position in the sorted sublist.
4. Repeat the process for the next element in the unsorted part of the array.
5. After all elements are placed in their correct positions, the array will be sorted.

Insertion Sort Pseudocode:

insertionSort(arr):

for i = 1 to length(arr) - 1: // Start from the second element


key = arr[i] // Store the current element
j = i - 1 // Compare with the element before it

// Shift elements of the sorted sublist that are greater than the key
while j >= 0 and arr[j] > key:
arr[j + 1] = arr[j] // Shift element to the right
j = j - 1
// Insert the key into the correct position
arr[j + 1] = key

Advantages of Insertion Sort:

 Simple and intuitive.


 Works well for small or nearly sorted arrays.
 Efficient when the input is nearly sorted or when only a few elements are out of order.
 It is adaptive, meaning it can perform well if the array is already partially sorted.
 In-place sorting (doesn't require extra memory).

Disadvantages of Insertion Sort:

 Inefficient for large datasets because of its O(n²) time complexity.


 The algorithm is generally slower compared to more advanced algorithms like Merge Sort, Quick
Sort, etc., for large or unsorted arrays.

Example2:
Complexity of the Insertion Sort Algorithm
To sort an unsorted list with 'n' elements, we need to make (1+2+3+......+(n-1)) = (n(n-1))/2 comparisons in the worst case. If the list is already sorted, then it requires only n-1 comparisons.
WorstCase: O(n2)
BestCase: Ω(n)
Average Case : Θ(n2)

Example program for insertion sort


#include <stdio.h>

void insertionSort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i], j = i - 1;
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
}

int main() {
    int arr[] = {12, 11, 13, 5, 6};
    int n = sizeof(arr) / sizeof(arr[0]);

    insertionSort(arr, n);

    for (int i = 0; i < n; i++) printf("%d ", arr[i]);

    return 0;
}

Output:

5 6 11 12 13
SELECTION SORT:

 Given a list of data to be sorted, we simply select the smallest item and place it in a
sorted list. These steps are then repeated until we have sorted all of the data.
 In first step, the smallest element is search in the list, once the smallest element is
found, it is exchanged with the element in the first position.
 Now the list is divided into two parts. One is the sorted list, the other is the unsorted list. Find the smallest element in the unsorted list and exchange it with the element at the starting position of the unsorted list; after that it is added into the sorted list.
 This process is repeated until all the elements are sorted. Ex: being asked to sort a list on paper.

Step by Step Process


The selection sort algorithm is performed using the following steps...

 Step 1 - Select the first element of the list (i.e., the element at the first position in the list).
 Step 2 - Compare the selected element with all the other elements in the list.
 Step 3 - In every comparison, if any element is found smaller than the selected element (for ascending order), then both are swapped.
 Step 4 - Repeat the same procedure with the element in the next position in the list till the entire list is sorted.

Algorithm:
Selection Sort Algorithm (With SMALLEST subroutine)

This algorithm works by repeatedly finding the smallest element from the unsorted part of the
array and swapping it with the first unsorted element.

Main Algorithm: SELECTION SORT


SELECTION SORT(ARR, N)
Step 1: Repeat Steps 2 and 3 for K = 1 to N-1
Step 2: CALL SMALLEST(ARR, K, N, Loc)
Step 3: SWAP ARR[K] with ARR[Loc]
Step 4: EXIT
Steps in Detail:

1. Initialization: You begin by iterating from K = 1 to N-1. This means you start with the first element and move towards the second-to-last element.
2. CALL SMALLEST: For each K, you call the SMALLEST function to find the smallest element
from index K to N.
3. SWAP: After finding the smallest element (at Loc), you swap it with the element at index K.
4. Exit: Once all the elements are processed, the array is sorted.

SMALLEST Algorithm

This subroutine finds the smallest element in the unsorted part of the array, starting from index K.

SMALLEST(ARR, K, N, Loc)
Step 1: [INITIALIZE] SET Min = ARR[K]
Step 2: [INITIALIZE] SET Loc = K
Step 3: Repeat for J = K+1 to N
IF Min > ARR[J]
SET Min = ARR[J]
SET Loc = J
[END OF IF]
[END OF LOOP]
Step 4: RETURN Loc
Steps in Detail:

1. Initialization: Set Min to the value at ARR[K] and Loc to K (the initial position of the smallest
element).
2. Find Minimum: Iterate from J = K+1 to N and compare each element with Min. If an element is
smaller than Min, update Min and set Loc to the new index J.
3. Return: Once the loop completes, return the location Loc of the smallest element found.

Final Flow:

1. The main algorithm (SELECTION SORT) iterates through the array and calls SMALLEST to find the
minimum element in the unsorted part.
2. Once the smallest element is found, it's swapped with the element at position K (i.e., the first
unsorted element).
3. The process repeats until the entire array is sorted.

Example_1

Let’s consider the array [64, 25, 12, 22, 11].

1. First Pass (K = 1):


o The smallest element from [64, 25, 12, 22, 11] is 11 (found using the SMALLEST
algorithm).
o Swap 11 with the element at index 1. The array becomes [11, 25, 12, 22, 64].
2. Second Pass (K = 2):
o The smallest element from [25, 12, 22, 64] is 12.
o Swap 12 with 25. The array becomes [11, 12, 25, 22, 64].
3. Third Pass (K = 3):
o The smallest element from [25, 22, 64] is 22.
o Swap 22 with 25. The array becomes [11, 12, 22, 25, 64].
4. Fourth Pass (K = 4):
o The smallest element from [25, 64] is 25.
o No swap is needed as the element is already in the correct place.
5. The array is now sorted: [11, 12, 22, 25, 64].

Example 2: Consider the elements 23, 78, 45, 8, 32, 56.

Complexity of the Selection Sort Algorithm


To sort a list with 'n' elements, selection sort makes ((n-1)+(n-2)+(n-3)+......+1) = (n
(n-1))/2 comparisons in every case: even if the list is already sorted, each pass must still scan
all remaining elements to confirm the minimum. This is why the best case is no better than the
worst case.
WorstCase: O(n2)
BestCase: Ω(n2)
Average Case : Θ(n2)

……………………………………………………………………………………………….
EXAMPLE PROGRAM FOR Selection Sort:

#include <stdio.h>

void selectionSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int minIndex = i;
        for (int j = i + 1; j < n; j++) {
            if (arr[j] < arr[minIndex]) {
                minIndex = j;
            }
        }
        // Swap the found minimum element with the first unsorted element
        int temp = arr[i];
        arr[i] = arr[minIndex];
        arr[minIndex] = temp;
    }
}

int main() {
    int arr[] = {64, 25, 12, 22, 11};
    int n = sizeof(arr) / sizeof(arr[0]);

    selectionSort(arr, n);

    // Print sorted array
    for (int i = 0; i < n; i++) {
        printf("%d ", arr[i]);
    }
    return 0;
}

Output:

11 12 22 25 64

MERGE SORT:

Merge sort is one of the most efficient sorting algorithms. It works on the principle of
Divide and Conquer: it repeatedly breaks a list down into sublists until each sublist consists
of a single element, then merges those sublists in a manner that results in a sorted list.
Divide and Conquer:
 Divide and Conquer is an algorithmic pattern. The design takes a problem on a large
input, breaks the input into smaller pieces, solves the problem on each of the small pieces,
and then merges the piecewise solutions into a global solution. This mechanism of solving
the problem is called the Divide & Conquer strategy.
 A Divide and Conquer algorithm solves a problem using the following three steps.
1. Divide the original problem into a set of sub-problems.
2. Conquer: Solve every sub-problem individually, recursively.
3. Combine: Put together the solutions of the sub-problems to get the solution to the
whole problem.

Implementation of Recursive Merge Sort:

 The merge sort starts at the top and proceeds downwards: "split the array into
two, make a recursive call on each half, and merge the results", until one reaches the
bottom of the array-tree.
Example: Let us consider an example to understand the approach better.
1. Divide the unsorted list into n sub-lists based on the mid value, each sub-list
consisting of 1 element.
2. Repeatedly merge sub-lists to produce newly sorted sub-lists until there is only
1 sub-list remaining. This will be the sorted list.
Recursive Merge Sort Example:

Merge Algorithm:
Step 1: set i, j, k = 0
Step 2: repeat while both A and B still have unmerged elements:
if A[i] < B[j] then
copy A[i] to C[k] and increment i and k
else
copy B[j] to C[k] and increment j and k
Step 3: copy the remaining elements of either A or B into array C.

MergeSort Algorithm:
MergeSort(A, lb, ub)
{
    if lb < ub
    {
        mid = floor((lb + ub) / 2);
        MergeSort(A, lb, mid);
        MergeSort(A, mid + 1, ub);
        Merge(A, lb, ub, mid);
    }
}

Two-Way Merge Sort:



Program :
#include <stdio.h>
// Function to merge two subarrays
// Function to merge two sorted subarrays arr[left..mid] and arr[mid+1..right]
void merge(int arr[], int left, int mid, int right) {
    int n1 = mid - left + 1;
    int n2 = right - mid;
    // Temporary arrays
    int leftArr[n1], rightArr[n2];
    // Copy data into temporary arrays
    for (int i = 0; i < n1; i++) leftArr[i] = arr[left + i];
    for (int i = 0; i < n2; i++) rightArr[i] = arr[mid + 1 + i];
    // Merge the temporary arrays back into the original array
    int i = 0, j = 0, k = left;
    while (i < n1 && j < n2) {
        if (leftArr[i] <= rightArr[j]) arr[k++] = leftArr[i++];
        else arr[k++] = rightArr[j++];
    }
    // Copy remaining elements of leftArr, if any
    while (i < n1) arr[k++] = leftArr[i++];
    // Copy remaining elements of rightArr, if any
    while (j < n2) arr[k++] = rightArr[j++];
}
// Function to implement merge sort
void mergeSort(int arr[], int left, int right) {
    if (left < right) {
        int mid = left + (right - left) / 2;
        mergeSort(arr, left, mid);       // Sort the first half
        mergeSort(arr, mid + 1, right);  // Sort the second half
        merge(arr, left, mid, right);    // Merge the sorted halves
    }
}

// Function to print the array
void printArray(int arr[], int size) {
    for (int i = 0; i < size; i++) printf("%d ", arr[i]);
    printf("\n");
}

// Main function
int main() {
    int arr[] = {38, 27, 43, 3, 9, 82, 10};
    int arr_size = sizeof(arr) / sizeof(arr[0]);
    printf("Original array: ");
    printArray(arr, arr_size);
    mergeSort(arr, 0, arr_size - 1);
    printf("Sorted array: ");
    printArray(arr, arr_size);
    return 0;
}
Output:
Original array: 38 27 43 3 9 82 10
Sorted array: 3 9 10 27 38 43 82

Quick Sort:
Quick sort is a fast sorting algorithm used to sort a list of elements. The quick sort algorithm
was invented by C. A. R. Hoare.
The quick sort algorithm attempts to separate the list of elements into two parts and then
sorts each part recursively. That means it uses the divide and conquer strategy. In quick sort,
the partition of the list is performed based on an element called the pivot. Here the pivot
element is one of the elements in the list.
The list is divided into two partitions such that "all elements to the left of the pivot are smaller
than the pivot and all elements to the right of the pivot are greater than or equal to the pivot".

Step by Step Process


In Quick sort algorithm, partitioning of the list is performed using following steps...

 Step 1 - Consider the first element of the list as the pivot (i.e., the element at the first
position in the list).
 Step 2 - Define two variables i and j. Set i and j to the first and last elements of the
list respectively.
 Step 3 - Increment i until list[i] > pivot, then stop.
 Step 4 - Decrement j until list[j] <= pivot, then stop.
 Step 5 - If i < j then exchange list[i] and list[j].
 Step 6 - Repeat steps 3, 4 & 5 until i >= j.
 Step 7 - Exchange the pivot element with the list[j] element.
Following is the sample code for Quick sort...

#include<stdio.h>
void quick_sort(int a[], int low, int high);

int main()
{
    int i, n, a[30];
    printf("enter the size of array:");
    scanf("%d", &n);
    printf("enter the values of array:");
    for (i = 0; i < n; i++)
    {
        scanf("%d", &a[i]);
    }
    quick_sort(a, 0, n - 1);
    for (i = 0; i < n; i++)
    {
        printf("\n%d", a[i]);
    }
    return 0;
}

void quick_sort(int a[], int low, int high)
{
    int i, j, pivot, temp;
    if (low < high)
    {
        pivot = low;
        i = low;
        j = high;
        while (i < j)
        {
            /* <= so that i moves past the pivot and past equal elements */
            while (a[i] <= a[pivot] && i < high)
                i++;
            while (a[j] > a[pivot])
                j--;
            if (i < j)
            {
                temp = a[i];
                a[i] = a[j];
                a[j] = temp;
            }
        }
        /* place the pivot in its final position */
        temp = a[pivot];
        a[pivot] = a[j];
        a[j] = temp;
        quick_sort(a, low, j - 1);
        quick_sort(a, j + 1, high);
    }
}
OUTPUT :

enter the size of array:6


enter the values of array:

23
32
54
76
98
65

23
32
54
65
76
98

Complexity of the Quick Sort Algorithm
In the worst case (which occurs, for example, when the list is already sorted and the first
element is chosen as the pivot), every partition is maximally unbalanced, so quick sort makes
((n-1)+(n-2)+(n-3)+......+1) = (n(n-1))/2 comparisons. In the best and average cases, the pivot
splits the list roughly in half, giving about n log n comparisons.
Worst Case : O(n2)
Best Case : O (n log n)
Average Case : O (n log n)

Time Complexities of All the Searching & Sorting Techniques:

.…………. UNIT-1 NOTES ……………
