
03 Searching Algorithms

The document discusses searching algorithms, focusing on Linear Search and Binary Search. It details the Linear Search algorithm's efficiency, analyzing the number of element comparisons based on the position of the searched element, and introduces Binary Search as a more efficient alternative using a divide-and-conquer approach. Additionally, it covers Binary Search Trees, their properties, and operations such as searching, insertion, and deletion.


CS2315

Algorithm Fundamentals
Searching Algorithms

Lectures prepared by: Dr. Manal Alharbi, and Dr. Areej Alsini
Review: Searching Problem
• Assume A is an array with n elements A[1], A[2], … A[n]. For a
given element x, we must determine whether there is an index j,
1 ≤ j ≤ n, such that x = A[j]

• Two algorithms, among others, address this problem


• Linear Search
• Binary Search (using a divide-and-conquer strategy, and binary search trees)
Linear Search Algorithm
Algorithm: LINEARSEARCH

Input: array A[1..n] of n elements and an element x.

Output: j if x = A[j], 1 ≤ j ≤ n, and 0 otherwise.

1. j = 1
2. while (j ≤ n) and (x ≠ A[j]) do
3.    j = j + 1
4. end while
5. if j ≤ n then return j else return 0
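As an illustrative sketch (not part of the original slides), the pseudocode above maps almost directly to Python; the function keeps the 1-based return convention of LINEARSEARCH while using Python's 0-based indexing internally.

```python
def linear_search(A, x):
    """Scan A left to right; return the 1-based index of the first
    occurrence of x, or 0 if x does not appear (as in LINEARSEARCH)."""
    j = 0  # 0-based position of the element being examined
    while j < len(A) and x != A[j]:
        j += 1
    # j == len(A) means the scan fell off the end without finding x
    return j + 1 if j < len(A) else 0
```

For example, `linear_search([4, 1, 7, 3], 7)` returns 3, and searching for an absent element returns 0.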
Analyzing Linear Search
• One way to measure efficiency is to count how many statements get executed before the
algorithm terminates

• One should keep an eye, though, on statements that are executed “repeatedly”.

• What will be the number of “element” comparisons if 𝑥


• First appears in the first element of 𝐴
• First appears in the middle element of 𝐴
• First appears in the last element of 𝐴
• Doesn’t appear in 𝐴.
Analyzing Linear Search
• One way to measure efficiency is to count how many statements get executed before the
algorithm terminates

• One should keep an eye, though, on statements that are executed “repeatedly”.

• What will be the number of “element” comparisons if 𝑥


• First appears in the first element of 𝐴 → 1 comparison
• First appears in the middle element of 𝐴 → n/2 comparisons
• First appears in the last element of 𝐴 → n comparisons
• Doesn’t appear in 𝐴 → n comparisons
Analyzing Linear Search
We are interested here in finding the number of element comparisons. In the algorithm, element
comparisons are done in the while statement (Step 2) of the algorithm, that is
2. while (j ≤ n) and (x ≠ A[j])

The minimum number of comparisons occurs when the element we are looking for is the first element in
the array. In this case, there is only one element comparison.

The maximum number of element comparisons occurs when the element is not in the array at all,
or appears at the last index. This results in n comparisons.

Thus, the number of element comparisons in the linear search algorithm is between 1 and n inclusive.
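To make the 1-to-n range concrete, here is a small instrumented sketch (the function name is my own, not from the slides) that returns both the 1-based index and the number of element comparisons performed:

```python
def linear_search_count(A, x):
    """Linear search that also counts element comparisons (x vs A[j]).

    Returns (index, comparisons): index is 1-based, or 0 if x is absent.
    The count matches the analysis above: 1 if x is the first element,
    n if x is the last element or is not in A at all.
    """
    comparisons = 0
    for j, item in enumerate(A, start=1):
        comparisons += 1          # one comparison of x against A[j]
        if item == x:
            return j, comparisons
    return 0, comparisons
```

On A = [10, 20, 30, 40] (n = 4): searching for 10 costs 1 comparison, while searching for 40 or for the absent 99 costs 4.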
Linear Search Algorithm
Algorithm: LINEARSEARCH
Input: array A[1..n] of n elements and an element x.
Output: j if x = A[j], 1 ≤ j ≤ n, and 0 otherwise.

Algorithm                                      Cost    Time
1. j = 1                                       𝑐1      1
2. while (j ≤ n) and (x ≠ A[j]) do             𝑐2      𝐾
3.    j = j + 1                                𝑐3      𝐾 − 1
4. end while
5. if j ≤ n then return j                      𝑐4      1
   else return 0                               𝑐5      1

• 𝐾 ranges between 1 and 𝑛.
• Worst-case time = 𝑛 + (𝑛 − 1) = 2𝑛 − 1
• Total time complexity = 𝑶(𝟐𝒏 − 𝟏) = 𝑶(𝒏)
Can we do better?

• Yes, using binary search.

• Algorithms Design: Divide-and-conquer


• Let 𝜋 be any problem instance of size |𝜋| = n. To solve 𝜋 using divide-and-conquer, the following steps
are involved:
• Generate k subproblems from 𝜋 (for some k ≥ 1).
• Let these subproblems be 𝜋1, 𝜋2, ……, 𝜋k;
• for i = 1 to k do
• Recursively (or otherwise) solve 𝜋i;
• Combine the solutions obtained in the previous step to create a solution for 𝜋.

• A classical example of divide-and-conquer is binary search. For this problem, the input is a sorted
sequence X = k1, k2, ……, kn and another element x. The problem is to check whether x ∈ X.
Binary Search Trees- Review
• Tree representation:
• A linked data structure in which each node is an object
• Node representation (the figure shows a node with a key/data field, left (L) and right (R) child pointers, and a parent pointer):
• Key field (and satellite data)
• Left: pointer to the left child
• Right: pointer to the right child
• p: pointer to the parent (p[root[T]] = NIL)

• Satisfies the binary-search-tree property
Binary Search Tree Property
• Binary search tree property:
• If y is in the left subtree of x, then key[y] ≤ key[x]
• If y is in the right subtree of x, then key[y] ≥ key[x]
• Example: the tree with root 5 and children 3 and 7, where 3 has children 2 and 5 and 7 has right child 9, satisfies the property.
Binary Search Trees
• Support many dynamic set operations
• SEARCH, MINIMUM, MAXIMUM, PREDECESSOR, SUCCESSOR,
INSERT, DELETE

• Running time of basic operations on binary search trees

• On average: Θ(log n)
• The expected height of the tree is Θ(log n)

• In the worst case: Θ(n)
• The tree is a linear chain of n nodes
Traversing a Binary Search Tree
• Inorder tree walk:
• Root is printed between the values of its left and right subtrees (left, root, right)
• Keys are printed in sorted order
• Preorder tree walk:
• Root is printed first (root, left, right)
• Postorder tree walk:
• Root is printed last (left, right, root)

Example (tree with root 5, children 3 and 7; 3 has children 2 and 5; 7 has right child 9):
  Inorder:   2 3 5 5 7 9
  Preorder:  5 3 2 5 7 9
  Postorder: 2 5 3 9 7 5
Traversing a Binary Search Tree
Alg: INORDER-TREE-WALK(x)
1. if x ≠ NIL
2.   then INORDER-TREE-WALK(left[x])
3.        print key[x]
4.        INORDER-TREE-WALK(right[x])

• E.g., on the example tree with root 5 above, Output: 2 3 5 5 7 9
• Running time: Θ(n), where n is the size of the tree rooted at x
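A minimal Python sketch of INORDER-TREE-WALK, run on the example tree above (the `Node` class and function names are my own assumptions, not from the slides):

```python
class Node:
    """A BST node holding a key and left/right child pointers."""
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def inorder_tree_walk(x, visit=print):
    """Visit the keys of the subtree rooted at x in sorted order."""
    if x is not None:                  # x != NIL
        inorder_tree_walk(x.left, visit)
        visit(x.key)
        inorder_tree_walk(x.right, visit)

# The example tree: root 5, children 3 and 7; 3 has children 2 and 5;
# 7 has a right child 9.
root = Node(5, Node(3, Node(2), Node(5)), Node(7, None, Node(9)))

out = []
inorder_tree_walk(root, out.append)
print(out)  # [2, 3, 5, 5, 7, 9] -- the keys in sorted order
```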
Searching for a Key
• Given a pointer to the root of a tree and a key k:
• Return a pointer to a node with key k if one exists
• Otherwise return NIL

• Idea
• Starting at the root: trace down a path by comparing k with the key of the current node x:
• If the keys are equal: we have found the key
• If k < key[x]: search in the left subtree of x
• If k > key[x]: search in the right subtree of x
(Example tree: root 5 with children 3 and 7; 3 has children 2 and 4; 7 has right child 9.)
Example: TREE-SEARCH

• Search for key 13 in the tree with root 15 (children 6 and 18; 6 has children 3 and 7; 3 has children 2 and 4; 7 has right child 13; 13 has left child 9; 18 has children 17 and 20; 17 has left child 16):
• Path followed: 15 → 6 → 7 → 13
Searching for a Key
Alg: TREE-SEARCH(x, k)
1. if x = NIL or k = key[x]
2.   then return x
3. if k < key[x]
4.   then return TREE-SEARCH(left[x], k)
5.   else return TREE-SEARCH(right[x], k)

Running time: O(h), h – the height of the tree
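TREE-SEARCH translates almost line for line into Python; in this sketch `None` plays the role of NIL and the `Node` class is an assumption as before:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def tree_search(x, k):
    """Return the node with key k in the subtree rooted at x, or None."""
    if x is None or k == x.key:
        return x
    if k < x.key:
        return tree_search(x.left, k)   # BST property: k can only be left
    return tree_search(x.right, k)      # otherwise k can only be right

# Example tree: root 5, children 3 and 7; 3 has children 2 and 4;
# 7 has a right child 9.
root = Node(5, Node(3, Node(2), Node(4)), Node(7, None, Node(9)))
print(tree_search(root, 4).key)   # 4
print(tree_search(root, 8))       # None
```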
Finding the Minimum
in a Binary Search Tree
• Goal: find the minimum value in a BST
• Follow left child pointers from the root until a NIL is encountered

Alg: TREE-MINIMUM(x)
1. while left[x] ≠ NIL
2.   do x ← left[x]
3. return x

• Running time: O(h), h – height of tree
• Example: in the tree rooted at 15, the walk 15 → 6 → 3 → 2 gives Minimum = 2
Finding the Maximum
in a Binary Search Tree
• Goal: find the maximum value in a BST
• Follow right child pointers from the root until a NIL is encountered

Alg: TREE-MAXIMUM(x)
1. while right[x] ≠ NIL
2.   do x ← right[x]
3. return x

• Running time: O(h), h – height of tree
• Example: in the tree rooted at 15, the walk 15 → 18 → 20 gives Maximum = 20
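Both walks can be sketched in Python as follows (`Node` class assumed as before; the example tree is the one rooted at 15 from the slides):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def tree_minimum(x):
    """Follow left pointers until NIL; the last node holds the minimum."""
    while x.left is not None:
        x = x.left
    return x

def tree_maximum(x):
    """Follow right pointers until NIL; the last node holds the maximum."""
    while x.right is not None:
        x = x.right
    return x

# Tree rooted at 15: children 6 and 18; 6 has children 3 (with 2, 4)
# and 7 (with right child 13, which has left child 9); 18 has 17 and 20.
root = Node(15,
            Node(6, Node(3, Node(2), Node(4)),
                    Node(7, None, Node(13, Node(9)))),
            Node(18, Node(17), Node(20)))
print(tree_minimum(root).key)  # 2
print(tree_maximum(root).key)  # 20
```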
Successor
Def: successor(x) = y, such that key[y] is the smallest key > key[x]
• E.g. (in the tree rooted at 15): successor(15) = 17, successor(13) = 15, successor(9) = 13

• Case 1: right(x) is nonempty
• successor(x) = the minimum in right(x)
• Case 2: right(x) is empty
• Go up the tree until the current node is a left child: successor(x) is the parent of the current node
• If you cannot go further (you have reached the root): x is the largest element
Finding the Successor
Alg: TREE-SUCCESSOR(x)
1. if right[x] ≠ NIL
2.   then return TREE-MINIMUM(right[x])
3. y ← p[x]
4. while y ≠ NIL and x = right[y]
5.   do x ← y
6.      y ← p[y]
7. return y

Running time: O(h), h – height of the tree
Predecessor
Def: predecessor(x) = y, such that key[y] is the biggest key < key[x]
• E.g. (in the tree rooted at 15): predecessor(15) = 13, predecessor(9) = 7, predecessor(7) = 6

• Case 1: left(x) is nonempty
• predecessor(x) = the maximum in left(x)
• Case 2: left(x) is empty
• Go up the tree until the current node is a right child: predecessor(x) is the parent of the current node
• If you cannot go further (you have reached the root): x is the smallest element
Insertion
• Goal:
• Insert value v into a binary search tree

• Idea:
• If key[x] < v, move to the right child of x; else move to the left child of x
• When x is NIL, we have found the correct position
• If v < key[y], insert the new node as y’s left child; else insert it as y’s right child

• Beginning at the root, go down the tree and maintain:
• Pointer x: traces the downward path (current node)
• Pointer y: parent of x (the “trailing pointer”)

• Example: inserting 13 into the tree with root 12 (children 5 and 18; 5 has children 2 and 9; 2 has children 1 and 3; 18 has children 15 and 19; 15 has right child 17) places 13 as the left child of 15.
Example: TREE-INSERT
Insert 13 – trace of the pointers x and y:
• Initially x = root[T] = 12, y = NIL; 13 > 12, so move right: y = 12, x = 18
• 13 < 18, so move left: y = 18, x = 15
• 13 < 15, so move left: y = 15, x = NIL
• x = NIL and y = 15, so 13 is inserted as the left child of 15
Alg: TREE-INSERT(T, z)
1. y ← NIL
2. x ← root[T]
3. while x ≠ NIL
4.   do y ← x
5.      if key[z] < key[x]
6.        then x ← left[x]
7.        else x ← right[x]
8. p[z] ← y
9. if y = NIL
10.  then root[T] ← z          (tree T was empty)
11.  else if key[z] < key[y]
12.    then left[y] ← z
13.    else right[y] ← z

Running time: O(h)
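TREE-INSERT in Python (a sketch; the `Tree` wrapper holding the root pointer is an assumption of mine):

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = self.parent = None

class Tree:
    def __init__(self):
        self.root = None

def tree_insert(T, z):
    """Insert node z, descending with x and trailing pointer y."""
    y = None                      # y trails one step behind x
    x = T.root
    while x is not None:
        y = x
        x = x.left if z.key < x.key else x.right
    z.parent = y
    if y is None:
        T.root = z                # tree T was empty
    elif z.key < y.key:
        y.left = z
    else:
        y.right = z

# Rebuild the slide example, then insert 13.
T = Tree()
for k in [12, 5, 18, 2, 9, 15, 19, 1, 3, 17]:
    tree_insert(T, Node(k))
tree_insert(T, Node(13))
print(T.root.right.left.left.key)  # 13: it became the left child of 15
```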
Deletion
• Goal:
• Delete a given node z from a binary search tree
• Idea:
• Case 1: z has no children
• Delete z by making the parent of z point to NIL
• Example: deleting the leaf z = 13 (a child of 12) in the tree rooted at 15 leaves the rest of the tree unchanged.
Deletion
• Case 2: z has one child
• Delete z by making the parent of z point to z’s child, instead of to z
• Example: deleting z = 16 (whose only child is 20) in the tree rooted at 15 makes 15 point directly to 20.
Deletion
• Case 3: z has two children
• z’s successor y is the minimum node in z’s right subtree
• y has either no children or one right child (but no left child)
• Delete y from the tree (via Case 1 or Case 2)
• Replace z’s key and satellite data with y’s
• Example: deleting z = 5 (children 3 and 12): its successor y = 6 has one right child (7), so 6 is spliced out via Case 2 and z’s key becomes 6.
TREE-DELETE(T, z)
1. if left[z] = NIL or right[z] = NIL
2.   then y ← z                      (z has at most one child)
3.   else y ← TREE-SUCCESSOR(z)      (z has 2 children)
4. if left[y] ≠ NIL
5.   then x ← left[y]
6.   else x ← right[y]
7. if x ≠ NIL
8.   then p[x] ← p[y]
TREE-DELETE(T, z) – cont.
9. if p[y] = NIL
10.  then root[T] ← x
11.  else if y = left[p[y]]
12.    then left[p[y]] ← x
13.    else right[p[y]] ← x
14. if y ≠ z
15.  then key[z] ← key[y]
16.       copy y’s satellite data into z
17. return y

Running time: O(h)
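Putting the pieces together, TREE-DELETE can be sketched in Python (helper names are mine; `tree_insert` is included only to build the slide's example tree, and `inorder` to verify the result):

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = self.parent = None

class Tree:
    def __init__(self):
        self.root = None

def tree_insert(T, z):
    y, x = None, T.root
    while x is not None:
        y = x
        x = x.left if z.key < x.key else x.right
    z.parent = y
    if y is None:
        T.root = z
    elif z.key < y.key:
        y.left = z
    else:
        y.right = z

def tree_minimum(x):
    while x.left is not None:
        x = x.left
    return x

def tree_successor(x):
    if x.right is not None:
        return tree_minimum(x.right)
    y = x.parent
    while y is not None and x is y.right:
        x, y = y, y.parent
    return y

def tree_delete(T, z):
    """Delete z following the three cases; returns the spliced-out node."""
    # y is the node physically removed: z itself (<= 1 child) or z's successor
    y = z if z.left is None or z.right is None else tree_successor(z)
    x = y.left if y.left is not None else y.right   # y's only child, or None
    if x is not None:
        x.parent = y.parent
    if y.parent is None:
        T.root = x
    elif y is y.parent.left:
        y.parent.left = x
    else:
        y.parent.right = x
    if y is not z:          # two-children case: move y's key into z
        z.key = y.key
    return y

def inorder(x, out):
    if x is not None:
        inorder(x.left, out)
        out.append(x.key)
        inorder(x.right, out)

# Slide example tree, then delete 12 (a node with two children).
T = Tree()
for k in [15, 5, 16, 3, 12, 20, 10, 13, 18, 23, 6, 7]:
    tree_insert(T, Node(k))
z = T.root.left.right            # the node holding 12
tree_delete(T, z)
out = []
inorder(T.root, out)
print(out)  # [3, 5, 6, 7, 10, 13, 15, 16, 18, 20, 23]
```

Deleting 12 takes the two-children case: its successor 13 (a leaf) is spliced out and its key replaces 12's, so the inorder walk stays sorted.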
Binary Search Trees - Summary
• Operations on binary search trees:
• SEARCH O(h)
• PREDECESSOR O(h)
• SUCCESSOR O(h)
• MINIMUM O(h)
• MAXIMUM O(h)
• INSERT O(h)
• DELETE O(h)
• These operations are fast if the height of the tree is
small – otherwise their performance is similar to that
of a linked list
Theorem
• The number of comparisons performed by Algorithm BINARY SEARCH on a sorted array
of size n is at most ⌊𝒍𝒐𝒈 𝒏⌋ + 𝟏

• Proof:
• Fact: If T is a binary tree with n nodes whose height is h, then
𝑛 ≥ ℎ ≥ ⌈log(𝑛 + 1)⌉.

• In a binary tree, the maximum number of nodes we can have in level i is 2^(i−1).
• This implies that the maximum number of nodes we can have in a binary tree of height h is 1 + 2 + 4 + ··· + 2^(h−1) = 2^h − 1.
• In other words, n ≤ 2^h − 1 ➔ n + 1 ≤ 2^h ➔ h ≥ log2(n + 1).
• The fact that h ≤ n is easy to see. Equality happens when we have a skewed tree where each node (other than the leaf) has a single
child.
• A binary tree is said to be full if every non-leaf has exactly two children and all the leaves are at the same level. A binary tree is
defined to be complete if it is full except that there could be some nodes missing in the last level, and the missing nodes in the last
level (if any) are right-justified. The height of a complete binary tree with n nodes is Θ(log n).
Binary Search
• We can do “better” than linear search if we know that the elements of A are sorted, say in non-decreasing order.

• As stated above, we are interested in determining whether a given element x is among the list of elements stored in the
array of size n.

• When the elements are sorted, we can use a more efficient algorithm, the binary search algorithm, as follows:
• Let A[low..high] be a non-empty array of elements sorted in non-decreasing order, and let A[mid] be its middle
element.
• The idea is to compare x with the middle element A[mid].
• If x > A[mid]:
• We observe that if x is in A, then it must be one of the elements A[mid+1], A[mid+2], …, A[high]
• It follows that we only need to search for x in A[mid+1..high].
• In other words, the entries in A[low..mid] are discarded.
• Similarly, if x < A[mid], then we only need to search for x in A[low..mid−1]

• This results in an efficient strategy, which is referred to as binary search.


Binary Search Example
• Example 1 (cont.): finally, x is compared with A[14] (= 35), as

𝑚𝑖𝑑 = (14 + 14)/2 = 14

and hence the search is successfully completed.


Binary Search Algorithm
Algorithm: BINARY_SEARCH
Input: An array A[1..n] of n elements sorted in non-decreasing order
and an element x.
Output: j if x = A[j], 1 ≤ j ≤ n, and 0 otherwise.
1. low = 1; high = n; j = 0
2. while (low ≤ high) and (j = 0)
3.   mid = ⌊(low + high)/2⌋
4.   if x = A[mid] then j = mid
5.   else if x < A[mid] then high = mid – 1
6.   else low = mid + 1
7. end while
8. return j

(Steps 4–6 can be considered as one element comparison.)
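A direct Python rendering of BINARY_SEARCH, keeping the 1-based index convention of the pseudocode (a sketch, not part of the original slides):

```python
def binary_search(A, x):
    """Return the 1-based index j with A[j-1] == x, or 0 if x is absent.

    A must be sorted in non-decreasing order; each iteration halves the
    remaining range, so at most floor(log2 n) + 1 probes are made.
    """
    low, high = 1, len(A)
    j = 0
    while low <= high and j == 0:
        mid = (low + high) // 2
        if x == A[mid - 1]:          # shift to Python's 0-based indexing
            j = mid
        elif x < A[mid - 1]:
            high = mid - 1           # discard A[mid..high]
        else:
            low = mid + 1            # discard A[low..mid]
    return j
```

For example, `binary_search([1, 3, 5, 7, 9, 11], 7)` returns 4 after only three probes (positions 3, 5, then 4).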
