
Smt. Chandaben Mohanbhai Patel Institute of Computer Applications
CHAROTAR UNIVERSITY OF SCIENCE AND TECHNOLOGY

B.C.A. (Semester-3)
CA229 || Data Structures and Algorithms

Unit-6: Sorting and Searching


• Sorting Notations and Concepts
• Sorting Techniques
• Sequential Searching
• Binary Searching
• Search Trees
• Hash Table Methods.
Searching:
Searching means finding a particular element in a table of data. A search ends in one of two outcomes: the element is found, in which case the search is successful, or it is not found, in which case the search is unsuccessful. There are two main search methods.
1. Linear or sequential search
2. Binary search

Linear Search:
Linear search is a straightforward method: it examines the elements of the list one after another, in sequence. It can be applied to a sorted or an unsorted list. Searching starts from the first element and continues until the element is found or the end of the table is reached.
Example: Input: 59 6 14 10 8 11 12
The table above holds unsorted elements. To find 10, we compare 10 with the first element; if they are equal, we stop; otherwise we move on to the next element, continuing until the value is found or the table ends. Linear search performance is measured by counting the comparisons needed to find an element; in this technique, the number of comparisons required is O(n).

Algorithm of Linear Search Method:

Step 1: [Enter total no. of elements]
n ← no
Step 2: [Initialize array]
For i = 1 to n
a[i] ← value
i ← i + 1
Step 3: [Enter searched element]
ELE ← Searched_Element
Step 4: [Move loop from i = 1 to n]
For i = 1 to n
Repeat step 5
Step 5: [Compare searched item with array element]
If ELE = a[i] Then
Write "Search is successful"
End If
Step 6: Exit
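As an illustration, the algorithm above can be written in Python (a minimal sketch; the function name and 0-based indexing are our own, and unlike the pseudocode it stops at the first match and reports failure explicitly):

```python
def linear_search(a, ele):
    """Return the index of ele in list a, or -1 if the search fails."""
    for i, value in enumerate(a):
        if value == ele:
            return i      # search is successful
    return -1             # search is unsuccessful

print(linear_search([59, 6, 14, 10, 8, 11, 12], 10))  # prints 3
```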

Time Complexity:
The time complexity of linear search is O(n) because, in the worst case, every element in the array is compared once.
Best Case Average Case Worst Case
O(1) O(n) O(n)

Space Complexity:
The space complexity for Linear Search is O(1).

Advantages:
1. Simple – easy to understand
2. Memory-efficient
Disadvantages:
1. The time taken to search grows in proportion to the number of elements, so it is slow for large tables.
2. Poor efficiency compared with binary search on sorted data.

Binary Search:
The prerequisite of binary search is that the data to be searched must be in sorted order. To search for an element, we compare it with the middle element of the list; if it matches the element we want, the search is successful and we stop. Otherwise, the list is divided into two halves: one from the first element up to the middle element, and the other from the middle element to the last element. Since the list is sorted, all the elements in the first half are smaller than the middle element, while all the elements in the second half are greater than it.

The search then continues in one of the two halves, depending on whether the target element is greater or smaller than the middle element: if it is smaller, the first half is processed; otherwise, the second half is.

Why Do We Need Binary Search?


• Binary search works efficiently on sorted data, no matter the size of the data.
• Instead of going through the data in sequence, the algorithm jumps directly to the middle of the remaining range. This makes the search cycles shorter.
• Binary search exploits the ordering of the sorted data, so each comparison eliminates half of the remaining elements, whereas a scan using only equality comparisons eliminates just one element at a time.
• After every cycle of the search, the algorithm divides the array size in half; hence, in the next iteration, it works only on the remaining half of the array.

Algorithm of Binary Search Method:

Step 1: [Enter total no. of elements]
n ← no
Step 2: [Initialize array in sorted order]
For i = 0 to n-1
a[i] ← value
i ← i + 1
Step 3: [Enter searched element]
ELE ← searched_element
Step 4: [Initialize bounds and find middle element]
LOW ← 0
HIGH ← n - 1
MID ← (LOW + HIGH) / 2
Step 5: Repeat step 6
while ELE ≠ a[MID] and LOW <= HIGH
Step 6: [Compare element and halve the range]
If ELE < a[MID] Then
HIGH ← MID - 1
Else
LOW ← MID + 1
End If
MID ← (LOW + HIGH) / 2
Step 7: [Search is found]
If ELE = a[MID] Then
Write "Search is successful"
End If
Step 8: Exit
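The same steps can be sketched in Python (a minimal illustration with our own function name; it returns the index of the element or -1 on failure):

```python
def binary_search(a, ele):
    """Return the index of ele in the sorted list a, or -1 if not found."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2   # middle element of the current range
        if a[mid] == ele:
            return mid            # search is successful
        elif ele < a[mid]:
            high = mid - 1        # continue in the first half
        else:
            low = mid + 1         # continue in the second half
    return -1                     # search is unsuccessful
```

Note that MID is recomputed on every iteration, so the range shrinks by half each time.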

Time Complexity:
Best Case Average Case Worst Case
O(1) O(log₂ n) O(log₂ n)
Space Complexity:
The space complexity of iterative binary search is O(1); a recursive implementation uses O(log₂ n) stack space.

Advantages:
1. One of the main advantages of a binary search is that it is much quicker than a
serial search because the search range is halved with each step.
2. It is useful when there is a large number of elements in an array.
Disadvantages:
1. The major limitation of binary search is that the array must be sorted before the
binary search operation can be performed.

Sorting:
Here we learn the following sorting techniques.
1. Bubble sort
2. Selection sort
3. Quicksort
4. Insertion sort
5. Merge sort
6. Heapsort
7. Address calculation sort
8. Radix sort

Bubble Sort:
In bubble sort, to arrange elements in ascending order we compare the first element with the second; if the first element is greater than the second, they are interchanged. We then compare the second element with the third element, and so on, up to the (n-1)th element. After completing the first pass, the largest element is in the last position; after the second pass, the second largest element is in the second-to-last position, and so on. After all the passes, the list of data is sorted.

Example: Data: 55 44 22 33 11
Pass 1:
55 44 22 33 11 → 44 55 22 33 11 → 44 22 55 33 11 → 44 22 33 55 11 → 44 22 33 11 55
After Pass 1, the array is: 44 22 33 11 55

Pass 2:
44 22 33 11 55 → 22 44 33 11 55 → 22 33 44 11 55 → 22 33 11 44 55
After Pass 2, the array is: 22 33 11 44 55

Pass 3:
22 33 11 44 55 → 22 11 33 44 55
After Pass 3, the array is: 22 11 33 44 55

Pass 4:
22 11 33 44 55 → 11 22 33 44 55
After Pass 4, the array is: 11 22 33 44 55

Pass 5:
No further swaps are needed; the array 11 22 33 44 55 is sorted.

Here we can see that, in each pass, the largest remaining element "bubbles" to its final position at the end of the list.
Algorithm: BUBBLE SORT (A, N)
Step 1: [Repeat for passes 1 to n-1]
For i = 1 to n - 1
Repeat steps 2 and 3
i ← i + 1
Step 2: [Compare adjacent pairs in this pass]
For j = 1 to n - i
Repeat step 3
j ← j + 1
Step 3: [Compare a[j] with a[j+1]]
If a[j] > a[j+1] Then
swap(a[j], a[j+1])
End If
Step 4: Exit
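The algorithm above can be sketched in Python (a minimal illustration; the function name and 0-based indexing are our own):

```python
def bubble_sort(a):
    """Sort list a in place in ascending order and return it."""
    n = len(a)
    for i in range(n - 1):            # n-1 passes
        for j in range(n - 1 - i):    # the last i elements are already in place
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

print(bubble_sort([55, 44, 22, 33, 11]))  # prints [11, 22, 33, 44, 55]
```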

Time Complexity:
In bubble sort, the number of comparisons in the first pass is n-1; in the second pass it is n-2, and so on. The best case of bubble sort occurs when the data is already sorted, so only one pass is required to confirm this. In general, n-1 passes are required. The worst case occurs when the given array is arranged in reverse order; the total number of comparisons is then n(n-1)/2, which is O(n²). In the average case, one must find the expected number of passes needed before a pass makes no swaps and sorting is complete.
Space Complexity:
The space complexity for bubble sort is O(1) because only a single additional memory location is required.
Advantages:
1. Simple algorithm
2. Easy to implement
Disadvantages:
1. It is fine for small sets, but the more items being sorted, the less efficient it is
compared to alternative methods of sorting.
2. The average time grows quadratically as the number of table elements increases.

Selection Sort:
In the selection sort technique, the first iteration finds the smallest of the remaining elements and compares it with the first element; if the first element is greater than this smallest element, they are interchanged. After the first iteration, the smallest element is in the first position while the rest of the array is unsorted; after the second iteration, the second smallest element is in the second position, and so on.
Example: Data: 77 33 44 11 88 22 66 55
Pass
K=1 & LOC=4 77 33 44 11 88 22 66 55
K=2 & LOC=6 11 33 44 77 88 22 66 55
K=3 & LOC=6 11 22 44 77 88 33 66 55
K=4 & LOC=6 11 22 33 77 88 44 66 55
K=5 & LOC=8 11 22 33 44 88 77 66 55
K=6 & LOC=7 11 22 33 44 55 77 66 88
K=7 & LOC=7 11 22 33 44 55 66 77 88
Sorted Order 11 22 33 44 55 66 77 88

Algorithm: SELECTION SORT (A, N)


Step 1: [Loop through the first (n-1) elements]
For i = 1 to n - 1
MinIndex ← i
Step 2: [Find the smallest element among positions i+1 to n]
For j = i + 1 to n
If a[j] < a[MinIndex] Then
MinIndex ← j
End If
End For
Step 3: [Swap elements]
If i ≠ MinIndex Then
temp ← a[i]
a[i] ← a[MinIndex]
a[MinIndex] ← temp
End If
Step 4: Exit

• Set min to the first location
• Search the minimum element in the array
• Swap the first location with the minimum value in the array
• Assign the second element as min.
• Repeat the process until we get a sorted array.
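The steps above can be sketched in Python (a minimal illustration; the names `selection_sort` and `min_index` are our own):

```python
def selection_sort(a):
    """Sort list a in place by repeatedly selecting the minimum element."""
    n = len(a)
    for i in range(n - 1):
        min_index = i                      # assume the current element is the minimum
        for j in range(i + 1, n):
            if a[j] < a[min_index]:
                min_index = j              # remember the smaller element's location
        if min_index != i:
            a[i], a[min_index] = a[min_index], a[i]   # swap into position i
    return a

print(selection_sort([77, 33, 44, 11, 88, 22, 66, 55]))
```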

Time Complexity:
The selection sort algorithm searches for the smallest element in the table and then puts it in its proper position. In pass 1 it makes n-1 comparisons, in pass 2 it makes n-2 comparisons, and so on. Selection sort takes no notice of the initial order of the elements in the table, which is why the complexity is the same for all cases. We can also analyse the complexity simply by observing the number of loops: there are two nested loops, so the complexity is n × n = n².

Best Case Average Case Worst Case
O(n²) O(n²) O(n²)

Space Complexity:
The space complexity for the Selection sort is O(1) because one extra variable is used.

Advantages:
1. After every pass, one more element is at its proper position.
2. Few temporary variables are required.
3. Simple algorithm.
4. Selection sort uses the minimum number of swap operations O(n) among all the
sorting algorithms.
Disadvantages:
1. The selection sort is not a very efficient algorithm when data sets are large.

Insertion Sort:
In insertion sort, each element is placed at its appropriate position among the elements already considered. In the first iteration, the second element is compared with the first element; in the second iteration, the third element is compared with the first and second elements, and so on. When the position where an element belongs is found, space is created for it by shifting the other elements one position to the right, and the element is inserted at that position.

Example: Data: 77 33 44 11 88 22 66 55
Pass (array shown after inserting the K-th element)

K=2   33 77 44 11 88 22 66 55

K=3   33 44 77 11 88 22 66 55

K=4   11 33 44 77 88 22 66 55

K=5   11 33 44 77 88 22 66 55

K=6   11 22 33 44 77 88 66 55

K=7   11 22 33 44 66 77 88 55

K=8   11 22 33 44 55 66 77 88

Sorted Order 11 22 33 44 55 66 77 88

Algorithm:
INSERTION SORT (A, N)
Step 1: For i = 2 to n
Repeat step 2
i ← i + 1
Step 2: For j = i down to 2
Repeat step 3
j ← j - 1
Step 3: [Shift the element into place]
If a[j] < a[j-1] Then
swap(a[j], a[j-1])
Else
break
End If
Step 4: Exit
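The shifting version of the algorithm can be sketched in Python (a minimal illustration; instead of repeated swaps it shifts elements right and drops the key into the gap, which does the same work with fewer moves):

```python
def insertion_sort(a):
    """Sort list a in place by inserting each element into the sorted prefix."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]   # shift the larger element one position right
            j -= 1
        a[j + 1] = key        # insert the element at its proper place
    return a

print(insertion_sort([77, 33, 44, 11, 88, 22, 66, 55]))
```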

Time Complexity:
In insertion sort, the i-th element is compared with the elements before it, so the first insertion makes one comparison, the second makes up to two comparisons, and so on. In the worst case, where all elements are in reverse order, the complexity is O(n²). If the table is already sorted, only one comparison is made in each iteration, so the sort is O(n). The speed of sorting can be improved by applying a binary search to find the position of the element.

Space Complexity:
Space complexity for Insertion sort is O(1) because one extra variable is used.

Advantage:
1. Simplicity
2. Efficient when the number of elements to be sorted is small.
3. It is very fast when the table is nearly sorted.
Disadvantage:
1. It is less efficient on a list containing more elements. As the number of elements
increases, the performance of the program would be slow. This sort needs a large
number of element shifts.

Quick Sort:
Quicksort is an efficient method for internal sorting. It is very popular and was introduced by Hoare in 1962. The idea behind quicksort is that sorting two short lists is much easier than sorting one long list. It is a divide-and-conquer method: the big table is divided into two small tables, and those small tables are divided into two smaller tables again.

In quicksort, the original table is partitioned into two sub-tables such that all the elements on the left side are smaller than the key element, or pivot. The key element is then in its proper position. In the same way, we find a key or pivot element for each of the sub-tables and divide them into two further sub-tables. This process continues until all elements are sorted.

The pivot (key) element is placed at its proper position by the following process:
choose the first element as the key (pivot). Compare it with the elements from right to left until an element less than the key is found, then interchange them. The comparison then starts from the interchanged position, moving from left to right, to find an element greater than the pivot value, and they are interchanged again. Repeat this process until the pivot is at its final position.

Example:
Data: 44 33 11 55 77 90 40 60 99 22 88 66
44 33 11 55 77 90 40 60 99 22 88 66
22 33 11 55 77 90 40 60 99 44 88 66
22 33 11 44 77 90 40 60 99 55 88 66
22 33 11 40 77 90 44 60 99 55 88 66
22 33 11 40 44 90 77 60 99 55 88 66
11 33 22 40 44 90 77 60 99 55 88 66
11 22 33 40 44 90 77 60 99 55 88 66
11 22 33 40 44 66 77 60 99 55 88 90
11 22 33 40 44 66 77 60 90 55 88 99
11 22 33 40 44 66 77 60 88 55 90 99
11 22 33 40 44 55 77 60 88 66 90 99
11 22 33 40 44 55 66 60 88 77 90 99
11 22 33 40 44 55 60 66 88 77 90 99
11 22 33 40 44 55 60 66 77 88 90 99

Algorithm: QUICK SORT (K, LB, UB)


Step 1: [Initialize]
Flag ← true
Step 2: [Perform the sort]
If LB < UB Then
i ← LB + 1
j ← UB
Key ← K[LB]
Repeat while Flag
Repeat while K[i] ≤ Key and i < UB
i ← i + 1
Repeat while K[j] > Key
j ← j - 1
If i < j Then
swap(K[i], K[j])
Else
Flag ← false
swap(K[LB], K[j])
End If
End Repeat
Call QUICK SORT (K, LB, j-1)
Call QUICK SORT (K, j+1, UB)
End If
Step 3: Exit
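The partition-and-recurse scheme above can be sketched in Python (a minimal illustration with our own function name; as in the text, the first element of each range is taken as the pivot):

```python
def quick_sort(a, lb=0, ub=None):
    """Sort list a in place; the first element of each range is the pivot."""
    if ub is None:
        ub = len(a) - 1
    if lb < ub:
        pivot = a[lb]
        i, j = lb + 1, ub
        while True:
            while i <= ub and a[i] <= pivot:   # scan left-to-right for a larger element
                i += 1
            while a[j] > pivot:                # scan right-to-left for a smaller element
                j -= 1
            if i < j:
                a[i], a[j] = a[j], a[i]
            else:
                break
        a[lb], a[j] = a[j], a[lb]              # place the pivot at its final position
        quick_sort(a, lb, j - 1)               # sort the left sub-table
        quick_sort(a, j + 1, ub)               # sort the right sub-table
    return a

print(quick_sort([44, 33, 11, 55, 77, 90, 40, 60, 99, 22, 88, 66]))
```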

Time Complexity:
The best-case complexity is O(n log n). The time required to perform quicksort depends on the position of the pivot element in the list. In the average case, we assume that the list is divided equally, so the number of sub-lists at level i is 2^(i-1) and the number of levels is log₂ n. The worst-case complexity is O(n²).


Space Complexity:
The space complexity of quicksort is O(log n) on average for the recursion stack, and O(n) in the worst case, when the partitions are maximally unbalanced.
Advantage:
1. A fast method in practice.
2. The best method when the number of elements is large.
3. It requires no additional array storage.
Disadvantages:
1. Its running time can differ depending on the contents of the array.
2. It is recursive; if recursion is not available, the implementation becomes complicated.
3. It is not stable.

Merge Sort:
Merging means combining two or more sorted lists to make one sorted list. The merge process is described below.
1. Compare the front elements of the two sorted lists.
2. Move the smaller of the two into the third array.
3. Merging is complete when all elements from both lists are in the third list.
Let us explore one example of merging,
Example:
Set A: 10 12 18 20 22
Set B: 5 8 9 13 19

Output: 5 8 9 10 12 13 18 19 20 22

Algorithm: MERGE (A, M, B, N, C)


Step 1: [Initialize variables]
i ← 0
j ← 0
k ← 0
Step 2: Repeat step 3 while
i < m and j < n
Step 3: [Compare elements of both arrays]
If a[i] < b[j] Then
c[k] ← a[i]
i ← i + 1
k ← k + 1
Else
c[k] ← b[j]
j ← j + 1
k ← k + 1
End If
Step 4: If i = m Then
While (j < n)
c[k] ← b[j]
k ← k + 1
j ← j + 1
End While
End If
Step 5: If j = n Then
While (i < m)
c[k] ← a[i]
k ← k + 1
i ← i + 1
End While
End If
Step 6: Exit
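The merge step above can be sketched in Python (a minimal illustration; this implements only the merge of two already-sorted lists, which is the heart of merge sort):

```python
def merge(a, b):
    """Merge two sorted lists a and b into a new sorted list c."""
    c = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            c.append(a[i])
            i += 1
        else:
            c.append(b[j])
            j += 1
    c.extend(a[i:])   # copy any remaining elements of a
    c.extend(b[j:])   # copy any remaining elements of b
    return c

print(merge([10, 12, 18, 20, 22], [5, 8, 9, 13, 19]))
# prints [5, 8, 9, 10, 12, 13, 18, 19, 20, 22]
```

A full merge sort splits the list in half, sorts each half recursively, and then merges the two halves with exactly this routine.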

Time Complexity:
Merge sort makes log₂ n passes over the data, each costing O(n), giving O(n log n) complexity; however, it requires O(n) additional space for the auxiliary array. The best, average, and worst-case complexity of merge sort are all O(n log n).
Space Complexity:
Space complexity for merge sort can be O(n) in an implementation using arrays; this
means that this algorithm takes much space and may slow down operations for the large
data set and O(1) in linked list implementations.
Advantages:
1. It is quicker for larger lists because, unlike insertion sort and bubble sort, it does
not pass through the whole list many times.
Disadvantages:
1. It uses more memory space to store the sub-lists of the initial split list (at least
twice the memory requirement of in-place sorts).

Heap Sort:
To understand heap sort, we first have to understand the heap tree. A heap is a special kind of tree in which the data of an array is stored: for the node at index n (counting from 1), its left child is at index 2n and its right child at index 2n+1. A heap is a complete binary tree. There are two types of heap trees.

1. Max heap tree
2. Min heap tree.

In a max heap tree, or descending heap, every node has a value greater than or equal to the value of every child of that node, so in a max heap the root is the largest element of the table. In the same way, a heap is known as a min heap, or ascending heap, if every node has a value less than or equal to the value of every child of that node.
To insert a node in a heap, follow the rules below.
1. Add the node at the end of the array.
2. Keep it in the proper place in the heap tree.

For the second rule, we compare the node with its parent; if it is greater than its parent, we interchange it with the parent. We repeat the same process with the new parent until the condition no longer holds. Let us see an example of creating a heap tree.
Index: 0 1 2 3 4 5 6 7 8 9
Value: 18 25 38 12 8 22 48 39 72 36
Step 1: The first element, 18, becomes the root of the tree.

Step 2: 25 is at position 1 in the array, so it is inserted as the left child of 18. Since 25 is greater than 18, we interchange them.

Step 3: Insert 38 as the right child of 25. Since 38 is greater than 25, interchange them.

Step 4: Insert 12 at the proper place.

Step 5: Insert 8 at the proper place.

Step 6: Insert 22 at the proper place.

Step 7: Insert 48 at the proper place.

Step 8: Now insert 39, which is greater than 12 and 18; interchange with them one by one.

Step 9: Insert 72, which is larger than 18, 39, and 48, so interchange them one by one.

Step 10: At last, insert 36, which is greater than 8, so interchange them. The result is the heap tree, in which every parent has a value greater than all of its children.
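The heap-building and sorting process can be sketched in Python (a minimal, 0-indexed illustration, so children sit at 2n+1 and 2n+2 rather than the 1-indexed 2n and 2n+1 of the text; the helper name `sift_down` is our own):

```python
def sift_down(a, root, end):
    """Move a[root] down until the max-heap property holds (end is inclusive)."""
    while 2 * root + 1 <= end:
        child = 2 * root + 1
        if child + 1 <= end and a[child + 1] > a[child]:
            child += 1                       # choose the larger of the two children
        if a[root] < a[child]:
            a[root], a[child] = a[child], a[root]
            root = child
        else:
            break

def heap_sort(a):
    """Heap sort: build a max heap, then repeatedly move the root to the end."""
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):      # build the max heap bottom-up
        sift_down(a, i, n - 1)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]          # largest element goes to its final place
        sift_down(a, 0, end - 1)             # restore the heap on the rest
    return a

print(heap_sort([18, 25, 38, 12, 8, 22, 48, 39, 72, 36]))
```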

Time Complexity:
In heap sort, we first create a heap tree and then, at every step, move the root into its proper position and restore the heap. Each of these operations takes at most O(log₂ n) time because the depth of the tree is log₂ n, so the overall time is O(n log n); in the worst case it also behaves as O(n log n). If the amount of data is small, this sorting technique is not preferred because of the extra time required for heap creation.

Space Complexity:
The heap sort algorithm uses O(1) memory space for the sorting operation.

Advantages:
1. The heap sort algorithm is a simple, fast sorting algorithm that can be used to sort
large data sets.
2. The Heap sort algorithm is widely used because of its efficiency.
3. The Heap sort algorithm can be implemented as an in-place sorting algorithm. It
means that its memory usage is minimal because apart from what is necessary to hold
the initial list of items to be sorted, it needs no additional memory space to work.
4. The Heap sort algorithm is more straightforward to understand than other equally
efficient sorting algorithms. Because it does not use advanced computer science
concepts such as recursion, it is also easier for programmers to implement correctly.
5. The Heap sort algorithm exhibits consistent performance. This means it performs
equally well in the best, average, and worst cases. Because of its guaranteed
performance, it is particularly suitable to use in systems with a critical response time.

Disadvantages:
1. The heap sort algorithm's worst-case running time is O(n log n), which is the same
as merge sort; however, it is not stable, and in practice it is usually slower than quicksort.

Radix Sort:
In radix sort, there are 10 buckets, one for each digit 0 to 9. In the first pass, elements are placed into buckets according to their units digit; in the second pass, according to their tens digit, and so on. If more than one element falls into a bucket, they are collected in the order in which they were placed.

Example: Data: 348 143 361 423 538 128 321 543 366

Pass 1 (units digit):
Bucket 1: 361, 321    Bucket 3: 143, 423, 543    Bucket 6: 366    Bucket 8: 348, 538, 128
(all other buckets are empty)
So the elements in the array will be: 361 321 143 423 543 366 348 538 128

Pass 2 (tens digit):
Bucket 2: 321, 423, 128    Bucket 3: 538    Bucket 4: 143, 543, 348    Bucket 6: 361, 366
(all other buckets are empty)
So the elements in the array will be: 321 423 128 538 143 543 348 361 366

Pass 3 (hundreds digit):
Bucket 1: 128, 143    Bucket 3: 321, 348, 361, 366    Bucket 4: 423    Bucket 5: 538, 543
(all other buckets are empty)
So the elements in the array will be: 128 143 321 348 361 366 423 538 543

20 | P a g e
Algorithm:
Step 1: [Perform the sort]
Repeat through step 4
for j = 1, 2, … d (the digit positions, least significant first)
Step 2: [Initialize the pockets]
Repeat for i = 0, 1, 2, … 9
T[i] ← B[i] ← null
Step 3: [Distribute each record into the appropriate pocket]
Repeat while R ≠ null
D ← jth digit of the key of R
Next ← link(R)
If T[D] = null Then
T[D] ← B[D] ← R
Else
Link(T[D]) ← R
T[D] ← R
End If
Link(R) ← null
R ← Next
End While
Step 4: [Combine the pockets]
p ← 0
Repeat while B[p] = null
p ← p + 1
First ← B[p]
Repeat for i = p+1, p+2, … 9
Prev ← T[i-1]
If B[i] ≠ null Then
Link(Prev) ← B[i]
Else
T[i] ← Prev
End If
End For
Step 5: Exit
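The same idea can be sketched in Python (a minimal illustration; it uses Python lists as buckets instead of the linked-list pockets of the algorithm, and assumes non-negative integer keys):

```python
def radix_sort(a, digits=3):
    """LSD radix sort for non-negative integers of up to `digits` digits."""
    for d in range(digits):
        buckets = [[] for _ in range(10)]               # one bucket per digit 0-9
        for x in a:
            buckets[(x // 10 ** d) % 10].append(x)      # place by the d-th digit
        a = [x for bucket in buckets for x in bucket]   # combine buckets 0..9 in order
    return a

print(radix_sort([348, 143, 361, 423, 538, 128, 321, 543, 366]))
# prints [128, 143, 321, 348, 361, 366, 423, 538, 543]
```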

Time Complexity:
In radix sort, the number of passes equals the number of digits of the largest element. Each pass processes all n elements, so for d-digit keys the running time is O(d·n); when d is about log n this gives O(n log n), while in the worst case, with keys of up to n digits, it degrades to O(n²). Radix sort depends on three things: the number of buckets required, the number of digits of the largest element, and the size of the array. The disadvantage of radix sort is the extra space required.

Space Complexity:
The space complexity for Radix Sort is O (n+k).

Advantages:
1. Fast when the keys are short, i.e., when the range of the array elements is less.
2. The Radix sort algorithm is well known as one of the fastest sorting algorithms for
numbers and even for letters.
3. The Radix sort algorithm is the most efficient algorithm for arranging elements in
descending order in an array.
Disadvantages:
1. Since Radix Sort depends on digits or letters, it is much less flexible than other
sorts; the sort must be rewritten for each different type of data.
2. The constant for Radix sort is more significant compared to other sorting
algorithms.
3. It takes more space compared to a Quicksort.
4. Low efficiency for most elements that are already arranged in ascending order in
an array.
5. When the Radix sort algorithm is applied to very small sets of data (numbers or
strings), the overhead of maintaining the buckets outweighs its O(n) asymptotic running time.

Address Calculation Sort:


This can be one of the fastest distributive sorting techniques if enough space is available; it is also called hashing. In this algorithm, a hash function is applied to each element in the list. The result of the hash function determines an address in a table that represents the key. Linked lists are used as the address table for storing keys.

The hash function places the elements in linked lists called sub-files. An item is placed into a sub-file in the correct sequence by using any sorting method. After all the elements are placed into sub-files, the lists are concatenated to produce the sorted list.
Example:
Consider the following simple file: 45 23 78 20 55
• Let us create a sub-file for each of the possible first digits. Initially, each sub-file is empty.
• An array of 10 pointers is declared, where the pointer f[i] points to the first element
of the sub-file whose keys have first digit i.
• After scanning, the first element 45 is placed into the sub-file pointed to by f[4].
• Each sub-file is maintained as a sorted linked list of the original array elements.
• After processing all of the elements in the original file, the sub-files appear as follows.
F(0) → NULL

F(1) → NULL

F(2) → 20 → 23

F(3) → NULL

F(4) → 45

F(5) → 55

F(6) → NULL

F(7) → 78

F(8) → NULL

F(9) → NULL

Procedure:
Step 1: In this method, a hash function f is applied to each key.
Step 2: The result of this function determines into which of the several sub-files the
record is to be placed. After all the items of the original file have been placed
into the sub-files, the sub-files are concatenated to produce the sorted result.
The function should have the property that if x <= y, then f(x) <= f(y).
Such a function is called order-preserving.
Step 3: An item is placed into a sub-file in the correct sequence
by using any sorting method; simple insertion is often used.
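The procedure can be sketched in Python (a minimal illustration for the two-digit keys of the example; the order-preserving hash f(x) = x // 10, i.e. the first digit, and the function name are our own choices):

```python
import bisect

def address_sort(a):
    """Address calculation sort for two-digit keys: the order-preserving
    hash f(x) = x // 10 selects one of 10 sub-files, each kept sorted."""
    subfiles = [[] for _ in range(10)]
    for x in a:
        bisect.insort(subfiles[x // 10], x)   # simple insertion keeps each sub-file sorted
    return [x for f in subfiles for x in f]   # concatenate the sub-files in order

print(address_sort([45, 23, 78, 20, 55]))  # prints [20, 23, 45, 55, 78]
```

Because the hash is order-preserving, concatenating the sorted sub-files in index order yields the fully sorted list.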

Time Complexity:
The time complexity of this algorithm is O(n) in the best case. This happens when the values in the array are uniformly distributed within a specific range. The worst-case time complexity is O(n²); this happens when most of the values occupy one or two addresses, because then significant work is required to insert each value at its proper place.

Advantages:
1. Stable and fast.
2. They are used in exceptional cases when the key can be used to calculate the
address of buckets.

Comparison of different sorting methods:


Sorting Method             Best Case     Average Case   Worst Case

Bubble sort                O(n)          O(n²)          O(n²)

Selection sort             O(n²)         O(n²)          O(n²)

Insertion sort             O(n)          O(n²)          O(n²)

Merge sort                 O(n log n)    O(n log n)     O(n log n)

Heap sort                  O(n log n)    O(n log n)     O(n log n)

Quicksort                  O(n log n)    O(n log n)     O(n²)

Radix sort                 O(n log n)    —              O(n²)

Address Calculation sort   O(n)          —              O(n²)
