UNIT-5 Notes Data Structures.docx
The value of K, i.e., 41, is not matched with the first element of the array. So, move to the
next element. And follow the same process until the respective element is found.
Now, the element to be searched is found, so the algorithm will return the index of the
matched element.
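The scan described above can be written as a short C routine. The following is only an illustrative sketch; the array contents and the function name are assumptions, and the key 41 matches the example traced above.

#include <stdio.h>

/* Return the index of key in arr[0..n-1], or -1 if it is not present. */
int linearSearch(int arr[], int n, int key)
{
    for (int i = 0; i < n; i++) {
        if (arr[i] == key)   /* compare the key with the current element */
            return i;        /* found: return its index */
    }
    return -1;               /* reached the end of the array without a match */
}

int main(void)
{
    int a[] = {70, 40, 30, 11, 57, 41, 25, 14, 52};
    printf("41 found at index %d\n", linearSearch(a, 9, 41));   /* prints 5 */
    return 0;
}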
Linear Search complexity
Now, let's see the time complexity of linear search in the best case, average case, and worst
case. We will also see the space complexity of linear search.
Now, the element to be searched is found, so the algorithm will return the index of the
matched element.
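For comparison, here is a minimal iterative binary search in C. It assumes, as binary search requires, that the array is already sorted in ascending order; the function name and the overflow-safe computation of the middle index are conventional choices, not taken from the notes above.

/* Iterative binary search: arr[0..n-1] must already be sorted in ascending order.
   Returns the index of key, or -1 if key is not present. */
int binarySearch(int arr[], int n, int key)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   /* middle element (overflow-safe form) */
        if (arr[mid] == key)
            return mid;                     /* found in the middle: O(1) best case */
        else if (arr[mid] < key)
            low = mid + 1;                  /* key can only be in the right half */
        else
            high = mid - 1;                 /* key can only be in the left half */
    }
    return -1;                              /* search space exhausted */
}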
Binary Search complexity
Now, let's see the time complexity of Binary search in the best case, average case, and worst
case. We will also see the space complexity of Binary search.
1. Time Complexity
o Best Case Complexity - In Binary search, best case occurs when the element to
search is found in first comparison, i.e., when the first middle element itself is the
element to be searched. The best-case time complexity of Binary search is O(1).
o Average Case Complexity - The average case time complexity of Binary search
is O(logn).
o Worst Case Complexity - In Binary search, the worst case occurs when we have to
keep reducing the search space till it has only one element. The worst-case time
complexity of Binary search is O(logn).
2. Space Complexity
o The space complexity of Binary search is O(1), as the iterative version needs only a few
extra variables (low, high, and mid).
Here, 31 is greater than 12. That means both elements are already in ascending order. So, for
now, 12 is stored in a sorted sub-array.
Now, two elements in the sorted array are 12 and 25. Move forward to the next elements that
are 31 and 8.
Now, the sorted array has three items that are 8, 12 and 25. Move to the next items that are 31
and 32.
Hence, they are already sorted. Now, the sorted array includes 8, 12, 25 and 31.
Move to the next elements that are 32 and 17.
o Best Case Complexity - It occurs when no sorting is required, i.e. the array is
already sorted. The best-case time complexity of insertion sort is O(n).
o Average Case Complexity - It occurs when the array elements are in jumbled order,
i.e. neither properly ascending nor properly descending. The average case time
complexity of insertion sort is O(n²).
o Worst Case Complexity - It occurs when the array elements have to be sorted in reverse
order. That means, suppose you have to sort the array elements in ascending
order, but its elements are in descending order. The worst-case time complexity of
insertion sort is O(n²).
2. Space Complexity
Stable: YES
o The space complexity of insertion sort is O(1). It is because, in insertion sort, an extra
variable is required for swapping.
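The insertion procedure traced above can be summarised in a short C sketch. The in-place shifting formulation below is a standard one, used here only for illustration; the function and variable names are assumptions.

/* Insertion sort: grows a sorted sub-array on the left, one element at a time. */
void insertionSort(int arr[], int n)
{
    for (int i = 1; i < n; i++) {
        int key = arr[i];             /* element to place into the sorted sub-array */
        int j = i - 1;
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];      /* shift larger elements one position to the right */
            j--;
        }
        arr[j + 1] = key;             /* insert the element at its correct position */
    }
}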
Now, for the first position in the sorted array, the entire array is to be scanned sequentially.
At present, 12 is stored at the first position, after searching the entire array, it is found
that 8 is the smallest value.
So, swap 12 with 8. After the first iteration, 8 will appear at the first position in the sorted
array.
For the second position, where 29 is stored presently, we again sequentially scan the rest of
the items of the unsorted array. After scanning, we find that 12 is the second lowest element in
the array and should appear at the second position.
Now, swap 29 with 12. After the second iteration, 12 will appear at the second position in the
sorted array. So, after two iterations, the two smallest values are placed at the beginning in a
sorted way.
The same process is applied to the rest of the array elements. Now, we are showing a pictorial
representation of the entire sorting process.
o Best Case Complexity - It occurs when no sorting is required, i.e. the array is
already sorted. The best-case time complexity of selection sort is O(n²).
o Average Case Complexity - It occurs when the array elements are in jumbled order,
i.e. neither properly ascending nor properly descending. The average case time
complexity of selection sort is O(n²).
o Worst Case Complexity - It occurs when the array elements have to be sorted in reverse
order. That means, suppose you have to sort the array elements in ascending
order, but its elements are in descending order. The worst-case time complexity of
selection sort is O(n²).
2. Space Complexity
Stable: NO (the usual swapping implementation of selection sort is not stable)
o The space complexity of selection sort is O(1). It is because, in selection sort, an extra
variable is required for swapping.
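A brief C sketch of the selection procedure described above; the function and variable names are illustrative assumptions.

/* Selection sort: repeatedly selects the smallest element of the unsorted part
   and swaps it into the next position of the sorted part. */
void selectionSort(int arr[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int min = i;                       /* index of the smallest element seen so far */
        for (int j = i + 1; j < n; j++) {
            if (arr[j] < arr[min])
                min = j;
        }
        if (min != i) {                    /* swap only if a smaller element was found */
            int tmp = arr[i];
            arr[i] = arr[min];
            arr[min] = tmp;
        }
    }
}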
First Pass
Sorting will start from the initial two elements. Let's compare them to check which is greater.
Here, 32 is greater than 13 (32 > 13), so it is already sorted. Now, compare 32 with 26.
Here, 26 is smaller than 32, so swapping is required. After swapping, the new array will look
like -
Here, 10 is smaller than 35, so they are not in order and swapping is required. Now, we reach
the end of the array. After the first pass, the array will be -
Here, 10 is smaller than 32. So, swapping is required. After swapping, the array will be -
o Best Case Complexity - It occurs when no sorting is required, i.e. the array is
already sorted. The best-case time complexity of bubble sort is O(n).
o Average Case Complexity - It occurs when the array elements are in jumbled order,
i.e. neither properly ascending nor properly descending. The average case time
complexity of bubble sort is O(n²).
o Worst Case Complexity - It occurs when the array elements have to be sorted in reverse
order. That means, suppose you have to sort the array elements in ascending
order, but its elements are in descending order. The worst-case time complexity of
bubble sort is O(n²).
2. Space Complexity
Stable: YES
o The space complexity of bubble sort is O(1). It is because, in bubble sort, an extra
variable is required for swapping.
o The optimized bubble sort uses two extra variables (one for swapping and one flag that
records whether any swap occurred), so its space complexity is still O(1).
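A short C sketch of bubble sort, including the optimised variant mentioned above that uses a flag to stop early. The flag and function names are assumptions for this illustration.

#include <stdbool.h>

/* Optimized bubble sort: after each pass the largest remaining element has
   "bubbled" to the end; the swapped flag stops early if a pass makes no swaps. */
void bubbleSort(int arr[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        bool swapped = false;
        for (int j = 0; j < n - 1 - i; j++) {
            if (arr[j] > arr[j + 1]) {       /* adjacent pair is out of order */
                int tmp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = tmp;
                swapped = true;
            }
        }
        if (!swapped)                        /* no swaps in this pass: already sorted */
            break;
    }
}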
In this article, we will discuss the Quicksort Algorithm. The working procedure of Quicksort
is also simple. This article will be very helpful and interesting to students as they might face
quicksort as a question in their examinations. So, it is important to discuss the topic.
Sorting is a way of arranging items in a systematic manner. Quicksort is the widely used
sorting algorithm that makes n log n comparisons in average case for sorting an array of n
elements. It is a faster and highly efficient sorting algorithm. This algorithm follows the
divide and conquer approach. Divide and conquer is a technique of breaking down the
algorithms into subproblems, then solving the subproblems, and combining the results back
together to solve the original problem.
Divide: In Divide, first pick a pivot element. After that, partition or rearrange the array into
two sub-arrays such that each element in the left sub-array is less than or equal to the pivot
element and each element in the right sub-array is larger than the pivot element.
Quicksort picks an element as pivot, and then it partitions the given array around the picked
pivot element. In quick sort, a large array is divided into two arrays in which one holds values
that are smaller than the specified value (Pivot), and another array holds the values that are
greater than the pivot.
After that, the left and right sub-arrays are also partitioned using the same approach. This
continues until only a single element remains in each sub-array.
Picking a good pivot is necessary for a fast implementation of quicksort. However, it is not
always easy to determine a good pivot. Some of the ways of choosing a pivot are as follows -
o Pivot can be random, i.e. select the random pivot from the given array.
o Pivot can be either the rightmost element or the leftmost element of the given array.
Algorithm:
Partition Algorithm:
To understand the working of quick sort, let's take an unsorted array. It will make the concept
more clear and understandable.
In the given array, we consider the leftmost element as pivot. So, in this case, a[left] = 24,
a[right] = 27 and a[pivot] = 24.
Since the pivot is at the left, the algorithm starts from the right and moves towards the left.
Now, a[pivot] < a[right], so the algorithm moves one position towards the left, i.e. -
Because a[pivot] > a[right], the algorithm swaps a[pivot] with a[right], and the pivot moves
to the right, as -
Now, a[left] = 19, a[right] = 24, and a[pivot] = 24. Since the pivot is at the right, the algorithm
starts from the left and moves to the right.
Now, a[left] = 9, a[right] = 24, and a[pivot] = 24. As a[pivot] > a[left], the algorithm moves
one position to the right, as -
Now, a[left] = 29, a[right] = 24, and a[pivot] = 24. As a[pivot] < a[left], swap a[pivot] and
a[left]; now the pivot is at the left, i.e. -
Since the pivot is at the left, the algorithm starts from the right and moves to the left. Now,
a[left] = 24, a[right] = 29, and a[pivot] = 24. As a[pivot] < a[right], the algorithm moves one
position to the left, as -
Now, a[pivot] = 24, a[left] = 24, and a[right] = 14. As a[pivot] > a[right], swap a[pivot]
and a[right]; now the pivot is at the right, i.e. -
Now, a[pivot] = 24, a[left] = 14, and a[right] = 24. The pivot is at the right, so the algorithm
starts from the left and moves to the right.
Now, a[pivot] = 24, a[left] = 24, and a[right] = 24. So pivot, left, and right are all pointing to
the same element. This marks the termination of the procedure.
Element 24, which is the pivot element, is placed at its exact position.
The elements on the right side of 24 are greater than it, and the elements on the left side
of 24 are smaller than it.
Now, in a similar manner, the quick sort algorithm is applied separately to the left and right
sub-arrays. After the sorting is done, the array will be -
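The walkthrough above moves the pivot itself back and forth between the left and right pointers. As a hedged illustration of the same divide-and-conquer idea, the C sketch below instead uses the common Lomuto partition scheme with the rightmost element as pivot; it is not a literal transcription of the pointer-swapping procedure traced above.

/* Lomuto partition: places the pivot (arr[high]) at its final position and returns
   that position; smaller elements end up on its left, larger ones on its right. */
static int partition(int arr[], int low, int high)
{
    int pivot = arr[high];
    int i = low - 1;
    for (int j = low; j < high; j++) {
        if (arr[j] <= pivot) {
            i++;
            int tmp = arr[i]; arr[i] = arr[j]; arr[j] = tmp;
        }
    }
    int tmp = arr[i + 1]; arr[i + 1] = arr[high]; arr[high] = tmp;
    return i + 1;
}

void quickSort(int arr[], int low, int high)
{
    if (low < high) {
        int p = partition(arr, low, high);   /* pivot is now at its exact position p */
        quickSort(arr, low, p - 1);          /* sort the left sub-array */
        quickSort(arr, p + 1, high);         /* sort the right sub-array */
    }
}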
Quicksort complexity
Now, let's see the time complexity of quicksort in best case, average case, and in worst case.
We will also see the space complexity of quicksort.
1. Time Complexity
o Best Case Complexity - In Quicksort, the best-case occurs when the pivot element is
the middle element or near to the middle element. The best-case time complexity of
quicksort is O(n*logn).
o Average Case Complexity - It occurs when the array elements are in jumbled order
that is not properly ascending and not properly descending. The average case time
complexity of quicksort is O(n*logn).
o Worst Case Complexity - In quicksort, the worst case occurs when the pivot element is
either the greatest or the smallest element. For example, if the pivot element is always the
last element of the array, the worst case occurs when the given array is already sorted in
ascending or descending order. The worst-case time complexity of quicksort is O(n²).
Although the worst-case complexity of quicksort is higher than that of other sorting algorithms
such as merge sort and heap sort, it is still faster in practice. The worst case rarely occurs
in quicksort because, by changing the choice of pivot, it can be implemented in different
ways; the worst case can be avoided by choosing a good pivot element.
2. Space Complexity
Stable: NO
o The space complexity of quicksort is O(log n) on average, due to the recursion stack, and
O(n) in the worst case.
In this article, we will discuss the merge sort Algorithm. Merge sort is the sorting technique
that follows the divide and conquer approach. This article will be very helpful and interesting
to students as they might face merge sort as a question in their examinations. In coding or
technical interviews for software engineers, sorting algorithms are widely asked. So, it is
important to discuss the topic.
Merge sort is similar to the quick sort algorithm as it uses the divide and conquer approach to
sort the elements. It is one of the most popular and efficient sorting algorithms. It divides the
given list into two equal halves, calls itself for the two halves and then merges the two sorted
halves. We have to define the merge() function to perform the merging.
The sub-lists are divided again and again into halves until the list cannot be divided further.
Then we combine the pairs of one-element lists into two-element lists, sorting them in the
process. The sorted two-element lists are then merged into four-element lists, and so on,
until we get the fully sorted list.
Algorithm
In the following algorithm, arr is the given array, beg is the starting element, and end is the
last element of the array.
The important part of the merge sort is the MERGE function. This function performs the
merging of two sorted sub-arrays that are A[beg…mid] and A[mid+1…end], to build one
sorted array A[beg…end]. So, the inputs of the MERGE function are A[], beg,
mid, and end.
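A C sketch of the MERGE function and the recursive merge sort described above; the temporary array allocated inside merge() is an implementation choice made for this illustration, not something prescribed by the notes.

#include <stdlib.h>

/* Merge the sorted sub-arrays A[beg..mid] and A[mid+1..end] into A[beg..end]. */
void merge(int A[], int beg, int mid, int end)
{
    int n = end - beg + 1;
    int *tmp = malloc(n * sizeof *tmp);   /* auxiliary array for the merged result */
    int i = beg, j = mid + 1, k = 0;

    while (i <= mid && j <= end)          /* repeatedly take the smaller front element */
        tmp[k++] = (A[i] <= A[j]) ? A[i++] : A[j++];
    while (i <= mid) tmp[k++] = A[i++];   /* copy any leftovers from the left half  */
    while (j <= end) tmp[k++] = A[j++];   /* copy any leftovers from the right half */

    for (k = 0; k < n; k++)               /* copy the merged result back into A */
        A[beg + k] = tmp[k];
    free(tmp);
}

/* Recursively split A[beg..end] into halves, sort the halves, and merge them. */
void mergeSort(int A[], int beg, int end)
{
    if (beg < end) {
        int mid = (beg + end) / 2;
        mergeSort(A, beg, mid);
        mergeSort(A, mid + 1, end);
        merge(A, beg, mid, end);
    }
}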
To understand the working of the merge sort algorithm, let's take an unsorted array. It will be
easier to understand the merge sort via an example.
Let the elements of array are -
According to the merge sort, first divide the given array into two equal halves. Merge sort
keeps dividing the list into equal parts until it cannot be further divided.
As there are eight elements in the given array, so it is divided into two arrays of size 4.
Now, again divide these two arrays into halves. As they are of size 4, so divide them into new
arrays of size 2.
Now, again divide these arrays to get the atomic value that cannot be further divided.
In combining, first compare the element of each array and then combine them into another
array in sorted order.
So, first compare 12 and 31, both are in sorted positions. Then compare 25 and 8, and in the
list of two values, put 8 first followed by 25. Then compare 32 and 17, sort them and put 17
first followed by 32. After that, compare 40 and 42, and place them sequentially.
In the next iteration of combining, we now compare the arrays with two data values and merge
them into arrays of four values in sorted order.
Now, there is a final merging of the arrays. After the final merging of above arrays, the array
will look like -
Now, let's see the time complexity of merge sort in best case, average case, and in worst case.
We will also see the space complexity of the merge sort.
1. Time Complexity
o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is
already sorted. The best-case time complexity of merge sort is O(n*logn).
o Average Case Complexity - It occurs when the array elements are in jumbled order
that is not properly ascending and not properly descending. The average case time
complexity of merge sort is O(n*logn).
o Worst Case Complexity - It occurs when the array elements are required to be sorted
in reverse order. That means suppose you have to sort the array elements in ascending
order, but its elements are in descending order. The worst-case time complexity of
merge sort is O(n*logn).
2. Space Complexity
Stable: YES
o The space complexity of merge sort is O(n). It is because, in merge sort, an additional
array of size n is required to hold the merged elements temporarily.
Merge Sort is an efficient and easy-to-implement sorting algorithm that uses the
divide-and-conquer approach. It breaks down the problem into smaller sub-problems to
process them individually and then joins them to make one complete sorted list.
The divide step in Merge Sort entails dividing the array in half and recursively sorting each
half. The conquer step involves merging the sorted sub-arrays to get the final sorted array.
The merging stage of Merge Sort, which combines two sorted sub-arrays into a single sorted
array, takes the longest.
Two-way merge sort merges two sorted lists into one sorted list. At each step it takes the
smaller of the two front items, so it produces a sorted list that contains all the items of both
input lists in time proportional to the total length of the input lists.
Unlike K-Way Merge Sort, which merges K-sorted lists, Two-Way Merge Sort merges only
two sorted lists.
The Two-Way Merge Sort algorithm divides, sorts, and merges a list to create a complete
sorted list. It selects the smaller element during the merging process.
Here are the steps for the implementation of two-way merge sort:
Merge_Sort() sorts an unsorted array in ascending order. You can modify merge() to sort in
descending order.
Advantages:
1. Efficient for extensive data: Two-way merge sort is very efficient for large datasets.
The time complexity is O(nlogn), which makes it suitable for sorting large datasets.
2. Stable: It is a stable sort, which means that equal elements remain in their relative
positions after sorting.
3. Divide and Conquer: It uses the divide and conquer approach, which makes it easier
to understand and implement.
Disadvantages:
1. Space Complexity: Two-way merge sort requires additional space for the temporary
arrays used during the merging process. This can be a disadvantage when dealing with
large datasets.
2. Not efficient for small arrays: It may not be the most efficient choice for small
arrays or partially sorted arrays.
3. Complexity: The algorithm is more complex compared to simple sorting algorithms
like bubble sort and insertion sort.
Merge sort is efficient with a time complexity of O(n log n), making it ideal for large
datasets. However, it requires extra space for temporary arrays during merging, which can be
a disadvantage when dealing with large datasets.
Before knowing more about the heap sort, let's first see a brief description of Heap.
What is a heap?
A heap is a complete binary tree, and a binary tree is a tree in which each node can have at
most two children. A complete binary tree is a binary tree in which all the levels except
possibly the last are completely filled, and the nodes of the last level are as far left as
possible.
Heapsort is a popular and efficient sorting algorithm. The concept of heap sort is to eliminate
the elements one by one from the heap part of the list, and then insert them into the sorted
part of the list.
Algorithm
1. HeapSort(arr)
2. BuildMaxHeap(arr)
3. for i = length(arr) to 2
4. swap arr[1] with arr[i]
5. heap_size[arr] = heap_size[arr] - 1
6. MaxHeapify(arr,1)
7. End
BuildMaxHeap(arr)
1. BuildMaxHeap(arr)
2. heap_size(arr) = length(arr)
3. for i = length(arr)/2 to 1
4. MaxHeapify(arr,i)
5. End
MaxHeapify(arr,i)
1. MaxHeapify(arr,i)
2. L = left(i)
3. R = right(i)
4. if L <= heap_size[arr] and arr[L] > arr[i]
5. largest = L
6. else
7. largest = i
8. if R <= heap_size[arr] and arr[R] > arr[largest]
9. largest = R
10. if largest != i
11. swap arr[i] with arr[largest]
12. MaxHeapify(arr,largest)
13. End
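The pseudocode above uses 1-based array indices. The following C sketch is an equivalent 0-based translation, which is an assumption made for C rather than a literal transcription of the pseudocode.

/* Sift the element at index i down so that the subtree rooted at i
   (within arr[0..size-1]) satisfies the max-heap property. */
static void maxHeapify(int arr[], int size, int i)
{
    int largest = i;
    int left = 2 * i + 1, right = 2 * i + 2;

    if (left < size && arr[left] > arr[largest])   largest = left;
    if (right < size && arr[right] > arr[largest]) largest = right;

    if (largest != i) {
        int tmp = arr[i]; arr[i] = arr[largest]; arr[largest] = tmp;
        maxHeapify(arr, size, largest);            /* continue sifting down */
    }
}

void heapSort(int arr[], int n)
{
    for (int i = n / 2 - 1; i >= 0; i--)           /* phase 1: build the max heap */
        maxHeapify(arr, n, i);

    for (int i = n - 1; i > 0; i--) {              /* phase 2: repeatedly remove the root */
        int tmp = arr[0]; arr[0] = arr[i]; arr[i] = tmp;   /* move the maximum to the end */
        maxHeapify(arr, i, 0);                     /* restore the heap on the remaining part */
    }
}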
In heap sort, there are basically two phases involved in sorting the elements. Using the heap
sort algorithm, they are as follows -
o The first step includes the creation of a heap by adjusting the elements of the array.
o After the creation of the heap, remove the root element of the heap repeatedly by
shifting it to the end of the array, and then restore the heap structure with the remaining
elements.
Now let's see the working of heap sort in detail by using an example. To understand it more
clearly, let's take an unsorted array and try to sort it using heap sort. It will make the
explanation clearer and easier.
First, we have to construct a heap from the given array and convert it into max heap.
After converting the given heap into max heap, the array elements are -
Next, we have to delete the root element (89) from the max heap. To delete this node, we
have to swap it with the last node, i.e. (11). After deleting the root element, we again have to
heapify it to convert it into max heap.
After swapping the array element 89 with 11, and converting the heap into max-heap, the
elements of array are -
In the next step, again, we have to delete the root element (81) from the max heap. To delete
this node, we have to swap it with the last node, i.e. (54). After deleting the root element, we
again have to heapify it to convert it into max heap.
After swapping the array element 81 with 54 and converting the heap into max-heap, the
elements of array are -
In the next step, we have to delete the root element (76) from the max heap again. To delete
this node, we have to swap it with the last node, i.e. (9). After deleting the root element, we
again have to heapify it to convert it into max heap.
After swapping the array element 76 with 9 and converting the heap into max-heap, the
elements of array are -
In the next step, again we have to delete the root element (54) from the max heap. To delete
this node, we have to swap it with the last node, i.e. (14). After deleting the root element, we
again have to heapify it to convert it into max heap.
After swapping the array element 54 with 14 and converting the heap into max-heap, the
elements of array are -
In the next step, again we have to delete the root element (22) from the max heap. To delete
this node, we have to swap it with the last node, i.e. (11). After deleting the root element, we
again have to heapify it to convert it into max heap.
After swapping the array element 22 with 11 and converting the heap into max-heap, the
elements of array are -
In the next step, again we have to delete the root element (14) from the max heap. To delete
this node, we have to swap it with the last node, i.e. (9). After deleting the root element, we
again have to heapify it to convert it into max heap.
After swapping the array element 14 with 9 and converting the heap into max-heap, the
elements of array are -
In the next step, again we have to delete the root element (11) from the max heap. To delete
this node, we have to swap it with the last node, i.e. (9). After deleting the root element, we
again have to heapify it to convert it into max heap.
After swapping the array element 11 with 9, the elements of array are -
Now, heap has only one element left. After deleting it, heap will be empty.
Now, let's see the time complexity of Heap sort in the best case, average case, and worst case.
We will also see the space complexity of Heapsort.
1. Time Complexity
o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is
already sorted. The best-case time complexity of heap sort is O(n log n).
o Average Case Complexity - It occurs when the array elements are in jumbled order
that is not properly ascending and not properly descending. The average case time
complexity of heap sort is O(n log n).
o Worst Case Complexity - It occurs when the array elements are required to be sorted
in reverse order. That means suppose you have to sort the array elements in ascending
order, but its elements are in descending order. The worst-case time complexity of
heap sort is O(n log n).
The time complexity of heap sort is O(n log n) in all three cases (best case, average case, and
worst case). The height of a complete binary tree having n elements is log n.
2. Space Complexity
Stable: NO
o The space complexity of heap sort is O(1), as only a constant number of extra variables
are required.
Radix sort is a linear sorting algorithm that is used for integers. In radix sort, sorting is
performed digit by digit, starting from the least significant digit and moving to the most
significant digit.
The process of radix sort is similar to sorting students' names in alphabetical order. In that
case, there are 26 radixes because there are 26 letters in the English alphabet. In the first
pass, the names of the students are grouped according to the ascending order of the first
letter of their names. After that, in the second pass, their names are grouped according to
the ascending order of the second letter of their names. And the process continues until we
find the sorted list.
Algorithm
1. radixSort(arr)
2. max = largest element in the given array
3. d = number of digits in the largest element (or, max)
4. Now, create 10 buckets for the digits 0 - 9
5. for i -> 0 to d
6. sort the array elements using counting sort (or any stable sort) according to the digits at the ith place
The steps used in the sorting of radix sort are listed as follows -
o First, we have to find the largest element (suppose max) in the given array. Suppose 'x'
is the number of digits in max. 'x' is calculated because we need to go through the
significant places of all the elements.
o After that, go through each significant place one by one. Here, we have to use a stable
sorting algorithm to sort the digits of each significant place.
Now let's see the working of radix sort in detail by using an example. To understand it more
clearly, let's take an unsorted array and try to sort it using radix sort. It will make the
explanation clearer and easier.
In the given array, the largest element is 736, which has 3 digits. So, the loop will run up
to three times (i.e., up to the hundreds place). That means three passes are required to sort
the array.
Now, first sort the elements on the basis of unit place digits (i.e., x = 0). Here, we are using
the counting sort algorithm to sort the elements.
Pass 1:
In the first pass, the list is sorted on the basis of the digits at 0's place.
Pass 2:
In this pass, the list is sorted on the basis of the next significant digits (i.e., the digits at
the tens place).
Pass 3:
In this pass, the list is sorted on the basis of the next significant digits (i.e., the digits at
the hundreds place).
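The three passes above can be implemented with counting sort as the stable per-digit sort. The C sketch below assumes non-negative integers (the usual precondition for this formulation); function names are only illustrative.

/* Stable counting sort of arr[0..n-1] by the digit at the given place value (1, 10, 100, ...). */
static void countingSortByDigit(int arr[], int n, int exp)
{
    int output[n];                                /* C99 variable-length array */
    int count[10] = {0};

    for (int i = 0; i < n; i++)                   /* count occurrences of each digit 0-9 */
        count[(arr[i] / exp) % 10]++;
    for (int d = 1; d < 10; d++)                  /* prefix sums give the final positions */
        count[d] += count[d - 1];
    for (int i = n - 1; i >= 0; i--) {            /* build output backwards to keep it stable */
        int d = (arr[i] / exp) % 10;
        output[--count[d]] = arr[i];
    }
    for (int i = 0; i < n; i++)
        arr[i] = output[i];
}

/* Radix sort: sort by units, then tens, then hundreds, ... up to the largest element. */
void radixSort(int arr[], int n)
{
    int max = arr[0];
    for (int i = 1; i < n; i++)
        if (arr[i] > max) max = arr[i];

    for (int exp = 1; max / exp > 0; exp *= 10)
        countingSortByDigit(arr, n, exp);
}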
Now, let's see the time complexity of Radix sort in best case, average case, and worst case.
We will also see the space complexity of Radix sort.
1. Time Complexity
o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is
already sorted. The best-case time complexity of Radix sort is Ω(n+k).
o Average Case Complexity - It occurs when the array elements are in jumbled order
that is not properly ascending and not properly descending. The average case time
complexity of Radix sort is θ(nk).
o Worst Case Complexity - It occurs when the array elements are required to be sorted
in reverse order. That means suppose you have to sort the array elements in ascending
order, but its elements are in descending order. The worst-case time complexity of
Radix sort is O(nk).
Radix sort is a non-comparative sorting algorithm. Its running time is linear in the number of
digit passes, which can be better than comparison-based algorithms whose complexity is
O(n log n).
2. Space Complexity
Stable: YES
o The space complexity of Radix sort is O(n + k), where k is the range of a digit, because
counting sort is used as the per-digit subroutine.
Hash functions are a fundamental concept in computer science and play a crucial role in
various applications such as data storage, retrieval, and cryptography. In data structures and
algorithms (DSA), hash functions are primarily used in hash tables, which are essential for
efficient data management. This article delves into the intricacies of hash functions, their
properties, and the different types of hash functions used in DSA.
A hash function is a function that takes an input (or ‘message’) and returns a fixed-size
string of bytes. The output, typically a number, is called the hash code or hash value. The
main purpose of a hash function is to efficiently map data of arbitrary size to fixed-size
values, which are often used as indexes in hash tables.
● Deterministic: A hash function must consistently produce the same output for the
same input.
● Fixed Output Size: The output of a hash function should have a fixed size, regardless
of the size of the input.
● Uniformity: The hash function should distribute the hash values uniformly across the
output space to avoid clustering.
● Pre-image Resistance: It should be computationally infeasible to reverse the hash
function, i.e., to find the original input given a hash value.
● Collision Resistance: It should be difficult to find two different inputs that produce
the same hash value.
● Hash Tables: The most common use of hash functions in DSA is in hash tables,
which provide an efficient way to store and retrieve data.
● Data Integrity: Hash functions are used to ensure the integrity of data by generating
checksums.
● Data Structures: Hash functions are utilized in various data structures such as Bloom
filters and hash sets.
There are many hash functions that use numeric or alphanumeric keys. This article focuses on
discussing different hash functions:
1. Division Method
2. Multiplication Method
3. Mid-Square Method
4. Folding Method
5. Cryptographic Hash Functions
6. Universal Hashing
7. Perfect Hashing
1. Division Method
The division method involves dividing the key by a prime number and using the remainder as
the hash value.
h(k) = k mod m
Where k is the key and m is a prime number.
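As a one-line C illustration of this formula (the function name is only for this example):

/* Division method: h(k) = k mod m, where m is typically chosen to be prime. */
unsigned int hashDivision(unsigned int k, unsigned int m)
{
    return k % m;
}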
Advantages:
● Simple to implement.
Disadvantages:
2. Multiplication Method
In the multiplication method, a constant A (0 < A < 1) is used to multiply the key. The
fractional part of the product is then multiplied by m to get the hash value.
h(k) = ⌊m (kA mod 1)⌋
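A small C illustration of this formula; the constant A ≈ 0.618 used below is a commonly suggested choice, not a requirement, and the function name is only for this example.

#include <math.h>

/* Multiplication method: h(k) = floor(m * (k*A mod 1)), with A in (0, 1). */
unsigned int hashMultiplication(unsigned int k, unsigned int m)
{
    const double A = 0.6180339887;        /* a commonly used value of A */
    double frac = fmod(k * A, 1.0);       /* fractional part of k*A */
    return (unsigned int)(m * frac);      /* truncation implements the floor */
}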
Advantages:
Disadvantages:
3. Mid-Square Method
In the mid-square method, the key is squared, and the middle digits of the result are taken as
the hash value.
Steps:
Advantages:
Disadvantages:
4. Folding Method
The folding method involves dividing the key into equal parts, summing the parts, and then
taking the modulo with respect to m.
Steps:
Advantages:
Disadvantages:
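The detailed steps of the mid-square and folding methods are not reproduced above, so the following C sketches show one plausible reading of each; the number of middle digits taken and the three-digit part size are assumptions made for illustration.

/* Mid-square method: square the key and take some middle digits as the hash value. */
unsigned int hashMidSquare(unsigned int k, unsigned int m)
{
    unsigned long long sq = (unsigned long long)k * k;
    unsigned int middle = (unsigned int)((sq / 100) % 10000);   /* pick four middle digits */
    return middle % m;
}

/* Folding method: split the key into fixed-size parts (three digits here),
   add the parts, and take the result modulo m. */
unsigned int hashFolding(unsigned int k, unsigned int m)
{
    unsigned int sum = 0;
    while (k > 0) {
        sum += k % 1000;    /* take the key three digits at a time */
        k /= 1000;
    }
    return sum % m;
}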
5. Cryptographic Hash Functions
Cryptographic hash functions are designed to be secure and are used in cryptography.
Examples include MD5, SHA-1, and SHA-256.
Characteristics:
● Pre-image resistance.
● Collision resistance.
Advantages:
● High security.
Disadvantages:
● Computationally intensive.
6. Universal Hashing
Universal hashing uses a family of hash functions to minimize the chance of collision for any
given set of inputs.
h(k) = ((a · k + b) mod p) mod m
Where a and b are randomly chosen constants, p is a prime number greater than m, and k is
the key.
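A C illustration of this formula; the particular prime p used below is only an example, and a and b are expected to be chosen at random when the hash table is created.

/* Universal hashing: h(k) = ((a*k + b) mod p) mod m. */
unsigned int hashUniversal(unsigned int k, unsigned int m,
                           unsigned long long a, unsigned long long b)
{
    const unsigned long long p = 2147483647ULL;   /* the Mersenne prime 2^31 - 1, as an example */
    return (unsigned int)(((a * k + b) % p) % m);
}

In practice, a is picked at random from 1 to p-1 and b from 0 to p-1 once, when the table is created.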
Advantages:
Disadvantages:
7. Perfect Hashing
Perfect hashing aims to create a collision-free hash function for a static set of keys. It
guarantees that no two keys will hash to the same value.
Types:
● Minimal Perfect Hashing: Ensures that the range of the hash function is equal to the
number of keys.
● Non-minimal Perfect Hashing: The range may be larger than the number of keys.
Advantages:
● No collisions.
Disadvantages:
● Complex to construct.
Languages like English or Spanish are not understood by computers, so users must
communicate with computers using a set of languages called programming languages. A
computer can be programmed using a variety of language families. Computers are
instruments created to address complicated issues, but only when a programming language
and programmer are used. Computer software is what powers the various browsers, games,
emails, operating systems, and applications. Any problem can be solved creatively through
programming.
Programming Languages
We require a variety of programming languages because no human language can be
understood by computers. Every language has advantages and disadvantages, and some are
more appropriate for a given task than others. Many diverse specialists, including software
developers, computer system engineers, web designers, app developers, etc., require a
programming language to do a variety of jobs; there are many programming languages. More
than 50 programming languages are used to perform different tasks, and the most commonly
used languages are HTML, Java, and the C language.
Computational Thinking
Using four fundamental patterns, computational thinking is a method for solving any
problem. If we effectively comprehend and use the four fundamental patterns, computational
thinking for programming becomes simple. The first step in effectively understanding an
issue is to break it down into smaller components. We can more effectively use other
computational thinking components when we divide the problems into smaller, more
manageable pieces. The second component in this process is pattern recognition; the
problems are reviewed to see if there is any sequence. If there are any patterns, they have
been categorized appropriately. If no patterns are found, further simplification of that issue is
not necessary. An abstraction or generalization of the issue serves as the third component.
When you stand back from the specifics of a problem, you can develop a more general
answer that can be useful in a number of different ways. The Algorithm, the fourth and final
component, is where problems are incrementally addressed. Making a plan for your solution
is crucial. A method for figuring out step-by-step directions on how to tackle any problem is
to use an algorithm.
The purpose of collision resolution during insertion is to locate an open location in the hash
table when the record’s home position is already taken. Any collision resolution technique
may be thought of as creating a series of hash table slots that may or may not contain the
record. The key will be in its home position in the first position in the sequence. The collision
resolution policy shifts to the following location in the sequence if the home position is
already occupied. Another slot needs to be sought if this is also taken, and so on. The probe
sequence is a collection of slots that is produced by a probe function that we will refer to as p.
This is how insertion operates.
Collision in Hashing
In hashing, the hash function is used to find an index into the array. The hash value is used to
create an index for the key in the hash table. The hash function may return the same hash
value for two or more keys. When two or more keys have the same hash value, a collision
happens. To handle such a collision, we use collision resolution techniques.
Separate chaining: In separate chaining, each slot of the hash table points to a linked list
(chain) of the records whose keys hash to that slot. Its advantages are:
● It is easy to implement.
● The hash table never fills up, so we can always add more elements to the chain.
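A minimal C sketch of separate chaining; the table size, the division-method hash, and all identifiers are assumptions made for this example.

#include <stdlib.h>

#define TABLE_SIZE 11

/* Separate chaining: each slot holds the head of a singly linked list of keys. */
struct Node {
    int key;
    struct Node *next;
};

struct Node *buckets[TABLE_SIZE];   /* file-scope pointers start out as NULL */

void insertChained(int key)
{
    int index = key % TABLE_SIZE;             /* hash function: division method */
    struct Node *node = malloc(sizeof *node);
    node->key = key;
    node->next = buckets[index];              /* push the new key onto the front of the chain */
    buckets[index] = node;
}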
Open addressing: To prevent collisions in the hash table, open addressing is employed as
a collision-resolution technique. No key is kept anywhere other than in the hash table itself.
As a result, the hash table's size must always be greater than or equal to the number of keys.
It is also known as closed hashing.
● Linear probing
● Quadratic probing
● Double hashing
Linear probing: This involves doing a linear probe for the following slot when a collision
occurs and continuing to do so until an empty slot is discovered.
The worst-case time to search for an element using linear probing is O(n). Linear probing
gives the best cache performance, but clustering is a concern. The key benefit of this method
is that it is simple to calculate.
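A minimal C sketch of insertion with linear probing; the table size, the EMPTY marker, and the division-method hash are assumptions for this example, and the table is assumed to have been initialised to EMPTY beforehand.

#define TABLE_SIZE 11
#define EMPTY      (-1)     /* marker for an unused slot */

int table[TABLE_SIZE];      /* assume every slot has been initialised to EMPTY */

/* Insert key using linear probing; returns the slot used, or -1 if the table is full. */
int insertLinearProbe(int key)
{
    int home = key % TABLE_SIZE;                 /* home position of the key */
    for (int i = 0; i < TABLE_SIZE; i++) {
        int slot = (home + i) % TABLE_SIZE;      /* probe sequence: home, home+1, ... */
        if (table[slot] == EMPTY) {
            table[slot] = key;
            return slot;
        }
    }
    return -1;                                   /* no empty slot found */
}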
Quadratic probing: When a collision happens in this scheme, we probe for the (i²)-th slot in
the i-th iteration, continuing to do so until an empty slot is found. In comparison to linear
probing, quadratic probing has worse cache performance. Additionally, clustering is less of
a concern with quadratic probing.
Double hashing: In this, you employ a second hash function, and in the i-th iteration you
probe the slot i * hash2(x). Determining two hash functions requires more time.
Although there is no clustering issue, the cache performance is relatively poor when
using double hashing.