Python Programming
Unit-5
Iterators in Python
• An iterator in Python is an object used to traverse iterable objects such as lists, tuples, dicts, and sets.
• An iterator object is obtained by calling the iter() function; iteration itself is done with the next() function.
• The __iter__() method is called to initialize an iterator and must return an iterator object.
• The __next__() method (called next in Python 2) returns the next value from the iterable.
• When we use a for loop to traverse any iterable object, it internally calls iter() to get an iterator object, and then calls next() on it repeatedly.
• __next__() raises StopIteration to signal the end of the iteration.
How an iterator really works in Python
# Here is an example of a Python built-in iterator
# value can be anything which can be iterated over
iterable_value = 'Goods'
iterable_obj = iter(iterable_value)

while True:
    try:
        # Iterate by calling next
        item = next(iterable_obj)
        print(item)
    except StopIteration:
        # the exception is raised when the iteration is over
        break
Output :
G
o
o
d
s
• Below is a simple custom Python iterator that
creates an iterator type that iterates from 10 to a
given limit.
• For example, if the limit is 15, then it prints 10
11 12 13 14 15.
• And if the limit is 5, then it prints nothing.
# A simple Python program to demonstrate
# working of iterators using an example type
# that iterates from 10 to a given value

# An iterable user-defined type
class Test:

    # Constructor
    def __init__(self, limit):
        self.limit = limit

    # Creates iterator object
    # Called when iteration is initialized
    def __iter__(self):
        self.x = 10
        return self

    # To move to the next element. In Python 3,
    # this method is named __next__ (next in Python 2)
    def __next__(self):

        # Store current value of x
        x = self.x

        # Stop iteration if limit is reached
        if x > self.limit:
            raise StopIteration

        # Else increment and return old value
        self.x = x + 1
        return x

# Prints numbers from 10 to 15
for i in Test(15):
    print(i)

# Prints nothing
for i in Test(5):
    print(i)
Output
• 10
• 11
• 12
• 13
• 14
• 15
Difference between Recursion and
Iteration
• A program is called recursive when an entity
calls itself. A program is called iterative when
there is a loop (or repetition).
• Example: Program to find the factorial of a
number
# Python3 program to find factorial of given number

# ----- Recursion -----
# Method to find factorial of given number
def factorialUsingRecursion(n):
    if (n == 0):
        return 1

    # recursive call
    return n * factorialUsingRecursion(n - 1)

# ----- Iteration -----
# Method to find the factorial of a given number
def factorialUsingIteration(n):
    res = 1

    # using iteration
    for i in range(2, n + 1):
        res *= i

    return res

# Driver method
num = 5
print("Factorial of", num, "using Recursion is:", factorialUsingRecursion(num))
print("Factorial of", num, "using Iteration is:", factorialUsingIteration(num))
Output:
Factorial of 5 using Recursion is: 120
Factorial of 5 using Iteration is: 120
• Time Complexity: Finding the time complexity
of recursion is more difficult than that of
iteration.
– Recursion: The time complexity of recursion is
found by writing a recurrence that expresses the
cost of the nth recursive call in terms of the
previous calls.
– Solving that recurrence down to the base case
gives us the time complexity of the recursive
function.
– Iteration: The time complexity of iteration is
found by counting the number of times the loop
body is repeated.
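To make the recurrence idea concrete, here is a small sketch (not part of the original text; the names are ours) that counts the calls made by a recursive factorial. The count grows linearly with n, matching the recurrence T(n) = T(n-1) + O(1), which solves to O(n):

```python
# Count the recursive calls of factorial to see the linear growth.
calls = 0

def fact(n):
    global calls
    calls += 1
    return 1 if n == 0 else n * fact(n - 1)

result = fact(10)
print(result, calls)  # 3628800 and 11 calls: one for each value from 10 down to 0
```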
• Usage: Usage of either of these techniques is a
trade-off between time complexity and size of
code. If time complexity is the point of focus, and
number of recursive calls would be large, it is
better to use iteration. However, if time
complexity is not an issue and shortness of code
is, recursion would be the way to go.
– Recursion: Recursion involves calling the same
function again, and hence, has a very small length of
code. However, as we saw in the analysis, the time
complexity of recursion can get to be exponential
when there are a considerable number of recursive
calls. Hence, usage of recursion is advantageous in
shorter code, but higher time complexity.
– Iteration: Iteration is repetition of a block of code.
This involves a larger size of code, but the time
complexity is generally lesser than it is for recursion.
• Overhead: Recursion has a large amount of Overhead
as compared to Iteration.
– Recursion: Recursion has the overhead of repeated
function calls, that is due to repetitive calling of the same
function, the time complexity of the code increases
manifold.
– Iteration: Iteration does not involve any such overhead.
• Infinite Repetition: Infinite repetition in recursion
typically crashes the program once the call stack is
exhausted, whereas an infinite loop in iteration simply
keeps running.
– Recursion: Infinite recursive calls may occur due to a
mistake in specifying the base condition; if the condition
never becomes true, the function keeps calling itself until
the call stack overflows (Python raises a RecursionError).
– Iteration: Infinite iteration due to a mistake in iterator
assignment or increment, or in the terminating condition,
leads to infinite loops, which may or may not cause
system errors, but will certainly prevent the program from
progressing any further.
Fibonacci sequence:
• A Fibonacci sequence is a sequence of integers
whose first two terms are 0 and 1, and every
other term of the sequence is obtained by
adding the two preceding terms.
• For example: 0, 1, 1, 2, 3, 5, 8, 13 and so on...
Example:
def recur_fibo(n):
    if n <= 1:
        return n
    else:
        return (recur_fibo(n-1) + recur_fibo(n-2))

# take input from the user
nterms = int(input("How many terms? "))

# check if the number of terms is valid
if nterms <= 0:
    print("Please enter a positive integer")
else:
    print("Fibonacci sequence:")
    for i in range(nterms):
        print(recur_fibo(i))
Recursion
• The process in which a function calls itself
directly or indirectly is called recursion, and
the corresponding function is called a
recursive function.
• Using recursive algorithm, certain problems
can be solved quite easily.
• Examples of such problems are Towers of
Hanoi (TOH), Inorder/Preorder/Postorder Tree
Traversals, DFS of Graph, etc.
A Mathematical Interpretation
• Consider the problem of determining the sum of the first n natural numbers.
There are several ways of doing this, but the simplest approach is simply to
add the numbers starting from 1 to n.
• So the function simply looks like:
• approach (1) – Simply adding one by one
• f(n) = 1 + 2 + 3 + …… + n
• But there is another mathematical way of representing this:
• approach (2) – Recursive adding
• f(n) = 1,            n = 1
• f(n) = n + f(n-1),   n > 1
• The difference between approach (1) and approach (2) is that in approach (2)
the function f() is called inside the function itself. This phenomenon is named
recursion, and a function containing recursion is called a recursive function. In
the end, this is a great tool in the hands of programmers to code some
problems in an easier and more efficient way.
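The two approaches above can be sketched directly in Python (a minimal illustration; the function names are ours):

```python
# approach (1): simply add 1..n one by one
def sum_iterative(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

# approach (2): f(n) = n + f(n-1), with f(1) = 1
def sum_recursive(n):
    return 1 if n == 1 else n + sum_recursive(n - 1)

print(sum_iterative(5), sum_recursive(5))  # both print 15
```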
Example
# A Python 3 program to demonstrate
# working of recursion
def printFun(test):
    if (test < 1):
        return
    else:
        print(test, end=" ")
        printFun(test-1)  # statement 2
        print(test, end=" ")
        return

# Driver Code
test = 3
printFun(test)
• Output :
• 3 2 1 1 2 3
# Python code to implement Fibonacci series

# Function for fibonacci
def fib(n):

    # Stop condition
    if (n == 0):
        return 0

    # Stop condition
    if (n == 1 or n == 2):
        return 1

    # Recursive case
    else:
        return (fib(n - 1) + fib(n - 2))

# Driver Code

# Initialize variable n.
n = 5
print("Fibonacci series of 5 numbers is:", end=" ")

# for loop to print the fibonacci series.
for i in range(0, n):
    print(fib(i), end=" ")
Output
Fibonacci series of 5 numbers is: 0 1 1 2 3
Tower of Hanoi
• Tower of Hanoi is a mathematical puzzle where we
have three rods and n disks.
• The objective of the puzzle is to move the entire stack
to another rod, obeying the following simple rules:
1) Only one disk can be moved at a time.
2) Each move consists of taking the upper disk from one
of the stacks and placing it on top of another stack i.e.
a disk can only be moved if it is the uppermost disk on
a stack.
3) No disk may be placed on top of a smaller disk.
# Recursive Python function to solve Tower of Hanoi
def TowerOfHanoi(n, from_rod, to_rod, aux_rod):
    if n == 1:
        print("Move disk 1 from rod", from_rod, "to rod", to_rod)
        return
    TowerOfHanoi(n-1, from_rod, aux_rod, to_rod)
    print("Move disk", n, "from rod", from_rod, "to rod", to_rod)
    TowerOfHanoi(n-1, aux_rod, to_rod, from_rod)

# Driver code
n = 4
TowerOfHanoi(n, 'A', 'C', 'B')
# A, C, B are the names of the rods
Output:
• Move disk 1 from rod A to rod B
• Move disk 2 from rod A to rod C
• Move disk 1 from rod B to rod C
• Move disk 3 from rod A to rod B
• Move disk 1 from rod C to rod A
• Move disk 2 from rod C to rod B
• Move disk 1 from rod A to rod B
• Move disk 4 from rod A to rod C
• Move disk 1 from rod B to rod C
• Move disk 2 from rod B to rod A
• Move disk 1 from rod C to rod A
• Move disk 3 from rod B to rod C
• Move disk 1 from rod A to rod B
• Move disk 2 from rod A to rod C
• Move disk 1 from rod B to rod C
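The 15 moves listed above are no accident: solving n disks always takes 2**n - 1 moves, because each disk doubles the work of the stack above it plus one move of its own. A quick check (our own helper, not part of the original program):

```python
# Moving n disks = move n-1 disks aside, move the largest, move n-1 back:
# moves(n) = 2 * moves(n-1) + 1, with moves(1) = 1, which solves to 2**n - 1.
def hanoi_moves(n):
    return 1 if n == 1 else 2 * hanoi_moves(n - 1) + 1

print(hanoi_moves(4))  # 15, matching the 15 "Move disk" lines above
```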
Search Algorithms in Python
• Searching for data stored in different data structures is
a crucial part of pretty much every single application.
• There are many different algorithms available to utilize
when searching, and each have different
implementations and rely on different data structures
to get the job done.
• Being able to choose a specific algorithm for a given
task is a key skill for developers and can mean the
difference between a fast, reliable and stable
application and an application that crumbles from a
simple request.
• Membership Operators
• Linear Search
• Binary Search
• Jump Search
• Fibonacci Search
• Exponential Search
• Interpolation Search
Membership Operators
• Algorithms develop and become optimized over time as a result
of constant evolution and the need to find the most efficient
solutions for underlying problems in different domains.
• One of the most common problems in the domain of Computer
Science is searching through a collection and determining
whether a given object is present in the collection or not.
• Almost every programming language has its own
implementation of a basic search algorithm, usually as a function
which returns a Boolean value of True or False when an item is
found in a given collection of items.
• In Python, the easiest way to search for an object is to use
Membership Operators - named that way because they allow us
to determine whether a given object is a member in a collection.
• These operators can be used with any iterable data
structure in Python, including Strings, Lists, and Tuples.
• in - Returns True if the given element is a part of the
structure.
• not in - Returns True if the given element is not a part of the
structure.
>>> 'apple' in ['orange', 'apple', 'grape']
True
>>> 't' in 'stackabuse'
True
>>> 'q' in 'stackabuse'
False
>>> 'q' not in 'stackabuse'
True
• Membership operators suffice when all we need to do
is find whether a substring exists within a given string,
or determine whether two Strings, Lists, or Tuples
intersect in terms of the objects they hold.
• In most cases we need the position of the item in the
sequence, in addition to determining whether or not it
exists; membership operators do not meet this
requirement.
• There are many search algorithms that don't depend
on built-in operators and can be used to search for
values faster and/or more efficiently.
• In addition, they can yield more information, such as
the position of the element in the collection, rather
than just being able to determine its existence.
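Before writing a custom search, it is worth noting that Python's built-in list.index() method already returns the position of the first occurrence (and raises ValueError when the item is absent):

```python
fruits = ['orange', 'apple', 'grape', 'apple']

# .index() returns the 0-based position of the first match
print(fruits.index('apple'))  # 1

# a missing element raises ValueError instead of returning -1
try:
    fruits.index('mango')
except ValueError:
    print('not found')
```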
Linear Search
• Linear search is one of the simplest searching algorithms, and the easiest
to understand.
• We can think of it as a ramped-up version of our own implementation of
Python's in operator.
• The algorithm consists of iterating over an array and returning the index of
the first occurrence of an item once it is found:
def LinearSearch(lys, element):
    for i in range(len(lys)):
        if lys[i] == element:
            return i
    return -1
So if we use the function to compute:
>>> print(LinearSearch([1,2,3,4,5,2,1], 2))
Upon executing the code, we're greeted with:
1
• This is the index of the first occurrence of the item we
are searching for - keeping in mind that Python indexes
are 0-based.
• The time complexity of linear search is O(n), meaning
that the time taken to execute increases with the
number of items in our input list lys.
• Linear search is not often used in practice, because the
same efficiency can be achieved by using inbuilt
methods or existing operators, and it is not as fast or
efficient as other search algorithms.
• Linear search is a good fit for when we need to find the
first occurrence of an item in an unsorted collection
because unlike most other search algorithms, it does
not require that a collection be sorted before searching
begins.
• Problem: Given an array arr[] of n elements, write a
function to search a given element x in arr[].
• Examples :
• Input : arr[] = {10, 20, 80, 30, 60, 50, 110, 100,
130, 170}
x = 110;
• Output : 6
• Element x is present at index 6
• Input : arr[] = {10, 20, 80, 30, 60, 50, 110, 100,
130, 170}
x = 175;
• Output : -1
• Element x is not present in arr[].
• A simple approach is to do a linear search, i.e.,
start from the leftmost element of arr[] and one
by one compare x with each element of arr[]
• If x matches an element, return the index.
• If x doesn't match any element, return -1.
# Python3 code to linearly search x in arr[].
# If x is present then return its location,
# otherwise return -1
def search(arr, n, x):
    for i in range(0, n):
        if (arr[i] == x):
            return i
    return -1

# Driver Code
arr = [2, 3, 4, 10, 40]
x = 10
n = len(arr)

# Function call
result = search(arr, n, x)
if(result == -1):
    print("Element is not present in array")
else:
    print("Element is present at index", result)
Output
Element is present at index 3
Binary Search
• Binary search follows a divide and conquer methodology.
• It is faster than linear search but requires that the array be sorted
before the algorithm is executed.
• Assuming that we're searching for a value val in a sorted array, the
algorithm compares val to the value of the middle element of the
array, which we'll call mid.
• If mid is the element we are looking for (best case), we return its
index.
• If not, we identify which side of mid val is more likely to be on
based on whether val is smaller or greater than mid, and discard
the other side of the array.
• We then recursively or iteratively follow the same steps, choosing a
new value for mid, comparing it with val and discarding half of the
possible matches in each iteration of the algorithm.
• The binary search algorithm can be written either recursively or iteratively.
• Recursion is generally slower in Python because it requires the allocation
of new stack frames.
• Since a good search algorithm should be as fast and accurate as possible,
let's consider the iterative implementation of binary search:
def BinarySearch(lys, val):
    first = 0
    last = len(lys)-1
    index = -1
    while (first <= last) and (index == -1):
        mid = (first+last)//2
        if lys[mid] == val:
            index = mid
        else:
            if val < lys[mid]:
                last = mid - 1
            else:
                first = mid + 1
    return index
If we use the function to compute:
>>> BinarySearch([10,20,30,40,50], 20)
We get the result:
1
Which is the index of the value that we are searching
for.
• The action that the algorithm performs next in each
iteration is one of several possibilities:
• Returning the index of the current element
• Searching through the left half of the array
• Searching through the right half of the array
• We can only pick one possibility per iteration, and our
pool of possible matches gets divided by two in each
iteration. This makes the time complexity of binary
search O(log n).
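Since the text notes that binary search can be written either recursively or iteratively, here is a minimal recursive sketch for comparison (our own variant, not from the original text; in Python it is generally slower because each call allocates a new stack frame):

```python
def BinarySearchRecursive(lys, val, first, last):
    # Base case: empty range, so val is not present
    if first > last:
        return -1
    mid = (first + last) // 2
    if lys[mid] == val:
        return mid
    elif val < lys[mid]:
        # discard the right half and recurse on the left
        return BinarySearchRecursive(lys, val, first, mid - 1)
    else:
        # discard the left half and recurse on the right
        return BinarySearchRecursive(lys, val, mid + 1, last)

print(BinarySearchRecursive([10, 20, 30, 40, 50], 20, 0, 4))  # 1
```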
• One drawback of binary search is that if there are multiple occurrences of
an element in the array, it does not return the index of the first element,
but rather the index of the element closest to the middle:
• >>> print(BinarySearch([4,4,4,4,4], 4))
• Running this piece of code will result in the index of the middle element:
• 2
• For comparison, performing a linear search on the same array would
return:
• 0
• Which is the index of the first element. However, we cannot categorically
say that binary search does not work if an array contains the same
element twice - it can work just like linear search and return the first
occurrence of the element in some cases.
• If we perform binary search on the array [1,2,4,4,4] for instance, and
search for 4, we would get 2 as the result, which here is also the index of
the first occurrence.
• Binary search is quite commonly used in practice because it is efficient and
fast when compared to linear search.
• However, it does have some shortcomings, such as its reliance on the //
operator.
• There are many other divide and conquer search algorithms that are
derived from binary search, let's examine a few of those next.
Fibonacci Search
• Fibonacci search is another divide and conquer algorithm which bears
similarities to both binary search and jump search.
• It gets its name because it uses Fibonacci numbers to calculate the block
size or search range in each step.
• Fibonacci numbers start with zero and follow the pattern 0, 1, 1, 2, 3, 5, 8,
13, 21... where each element is the addition of the two numbers that
immediately precede it.
• The algorithm works with three Fibonacci numbers at a time. Let's call the
three numbers fibM, fibM_minus_1, and fibM_minus_2 where
fibM_minus_1 and fibM_minus_2 are the two numbers immediately
before fibM in the sequence:
• fibM = fibM_minus_1 + fibM_minus_2
• We initialize the values to 0,1, and 1 or the first three numbers in the
Fibonacci sequence to avoid getting an index error in the case where our
search array lys contains a very small number of items.
• Then we choose the smallest number of the Fibonacci sequence that is
greater than or equal to the number of elements in our search array lys, as
the value of fibM, and the two Fibonacci numbers immediately before it as
the values of fibM_minus_1 and fibM_minus_2.
• While the array has elements remaining and the value
of fibM is greater than one, we:
• Compare val with the value of the block in the range
up to fibM_minus_2, and return the index of the
element if it matches.
• If the value is greater than the element we are
currently looking at, we move the values of fibM,
fibM_minus_1 and fibM_minus_2 two steps down in
the Fibonacci sequence, and reset the index to the
index of the element.
• If the value is less than the element we are currently
looking at, we move the values of fibM, fibM_minus_1
and fibM_minus_2 one step down in the Fibonacci
sequence.
• Let's take a look at the Python implementation of this algorithm:
def FibonacciSearch(lys, val):
    fibM_minus_2 = 0
    fibM_minus_1 = 1
    fibM = fibM_minus_1 + fibM_minus_2
    while (fibM < len(lys)):
        fibM_minus_2 = fibM_minus_1
        fibM_minus_1 = fibM
        fibM = fibM_minus_1 + fibM_minus_2
    index = -1
    while (fibM > 1):
        i = min(index + fibM_minus_2, (len(lys)-1))
        if (lys[i] < val):
            fibM = fibM_minus_1
            fibM_minus_1 = fibM_minus_2
            fibM_minus_2 = fibM - fibM_minus_1
            index = i
        elif (lys[i] > val):
            fibM = fibM_minus_2
            fibM_minus_1 = fibM_minus_1 - fibM_minus_2
            fibM_minus_2 = fibM - fibM_minus_1
        else:
            return i
    if(fibM_minus_1 and index < (len(lys)-1) and lys[index+1] == val):
        return index+1
    return -1
• If we use the FibonacciSearch function to compute:
>>> print(FibonacciSearch([1,2,3,4,5,6,7,8,9,10,11], 6))
• Let's take a look at the step-by-step process of this
search:
• Determining the smallest Fibonacci number greater
than or equal to the length of the list as fibM; in this
case, the smallest Fibonacci number meeting our
requirements is 13.
• The values would be assigned as:
o fibM = 13
o fibM_minus_1 = 8
o fibM_minus_2 = 5
o index = -1
• Next, we check the element lys[4] where 4 is the minimum of -1+5 .
Since the value of lys[4] is 5, which is smaller than the value we are
searching for, we move the Fibonacci numbers one step down in the
sequence, making the values:
o fibM = 8
o fibM_minus_1 = 5
o fibM_minus_2 = 3
o index = 4
• Next, we check the element lys[7] where 7 is the minimum of 4+3.
Since the value of lys[7] is 8, which is greater than the value we are
searching for, we move the Fibonacci numbers two steps down in
the sequence.
o fibM = 3
o fibM_minus_1 = 2
o fibM_minus_2 = 1
o index = 4
• Now we check the element lys[5] where 5 is the minimum of 4+1 .
The value of lys[5] is 6, which is the value we are searching for!
• The result, as expected, is: 5
• The time complexity for Fibonacci search is O(log n);
the same as binary search.
• This means the algorithm is faster than both linear
search and jump search in most cases.
• Fibonacci search can be used when we have a very
large number of elements to search through, and we
want to reduce the inefficiency associated with using
an algorithm which relies on the division operator.
• An additional advantage of using Fibonacci search is
that it can accommodate input arrays that are too large
to be held in CPU cache or RAM, because it searches
through elements in increasing step sizes, and not in a
fixed size.
Jump Search
• Jump Search is similar to binary search in that it works on a sorted
array, and uses a similar divide and conquer approach to search
through it.
• It can be classified as an improvement of the linear search
algorithm since it depends on linear search to perform the actual
comparison when searching for a value.
• Given a sorted array, instead of searching through the array
elements incrementally, we search in jumps.
• So in our input list lys, if we have a jump size of jump our algorithm
will consider elements in the order lys[0], lys[0+jump],
lys[0+2jump], lys[0+3jump] and so on.
• With each jump, we store the previous value we looked at and its
index. When we find a set of values where
lys[i]<element<lys[i+jump], we perform a linear search with lys[i] as
the left-most element and lys[i+jump] as the right-most element in
our search set:
import math

def JumpSearch(lys, val):
    length = len(lys)
    jump = int(math.sqrt(length))
    left, right = 0, 0
    while left < length and lys[left] <= val:
        right = min(length - 1, left + jump)
        if lys[left] <= val and lys[right] >= val:
            break
        left += jump
    if left >= length or lys[left] > val:
        return -1
    right = min(length - 1, right)
    i = left
    while i <= right and lys[i] <= val:
        if lys[i] == val:
            return i
        i += 1
    return -1
• Since this is a complex algorithm, let's consider the step-by-
step computation of jump search with this input:
• >>> print(JumpSearch([1,2,3,4,5,6,7,8,9], 5))
• Jump search would first determine the jump size by
computing math.sqrt(len(lys)). Since we have 9 elements,
the jump size would be √9 = 3.
• Next, we compute the value of the right variable, which is
the minimum of the length of the array minus 1, or the
value of left+jump, which in our case would be 0+3= 3.
Since 3 is smaller than 8 we use 3 as the value of right.
• Now we check whether our search element, 5, is between
lys[0] and lys[3]. Since 5 is not between 1 and 4, we move
on.
• Next, we do the calculations again and check whether our
search element is between lys[3] and lys[6], where 6 is
3+jump. Since 5 is between 4 and 7, we do a linear search
on the elements between lys[3] and lys[6] and return the
index of our element as:4
• The time complexity of jump search is O(√n), where n is
the length of the list and √n is the jump size, placing jump
search between the linear search and binary search
algorithms in terms of efficiency.
• The single most important advantage of jump search when
compared to binary search is that it does not rely on the
division operator (/).
• In most CPUs, using the division operator is costly when
compared to other basic arithmetic operations (addition,
subtraction, and multiplication), because the
implementation of the division algorithm is iterative.
• The cost by itself is very small, but when the number of
elements to search through is very large, and the number
of division operations that we need to perform increases,
the cost can add up incrementally.
• Therefore jump search is better than binary search when
there is a large number of elements in a system where even
a small increase in speed matters.
• To make jump search faster, we could use
binary search or another internal jump search
to search through the blocks, instead of
relying on the much slower linear search.
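The idea above can be sketched as follows (our own variant, not from the original text: jump search locates the block, then a binary search bounded to that block replaces the linear scan):

```python
import math

# Binary search restricted to the index range [lo, hi]
def binary_in_range(lys, val, lo, hi):
    while lo <= hi:
        mid = (lo + hi) // 2
        if lys[mid] == val:
            return mid
        elif lys[mid] < val:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def JumpBinarySearch(lys, val):
    n = len(lys)
    jump = int(math.sqrt(n))
    left = 0
    # Jump forward until we reach the block that could contain val
    while left < n and lys[min(n - 1, left + jump)] < val:
        left += jump
    right = min(n - 1, left + jump)
    # Search the located block with binary search instead of a linear scan
    return binary_in_range(lys, val, left, right)

print(JumpBinarySearch([1, 2, 3, 4, 5, 6, 7, 8, 9], 5))  # 4
```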
Exponential Search
• Exponential search is another search algorithm that can be
implemented quite simply in Python, compared to jump
search and Fibonacci search which are both a bit complex.
• It is also known by the names galloping search, doubling
search and Struzik search.
• Exponential search depends on binary search to perform
the final comparison of values.
• The algorithm works by:
• Determining the range where the element we're looking
for is likely to be
• Using binary search for the range to find the exact index
of the item
The Python implementation of the exponential search algorithm is:
def ExponentialSearch(lys, val):
    if lys[0] == val:
        return 0
    index = 1
    while index < len(lys) and lys[index] <= val:
        index = index * 2

    # slice the list and hand the rest of the work to binary search
    return BinarySearch(lys[:min(index, len(lys))], val)
If we use the function to find the value of:
>>> print(ExponentialSearch([1,2,3,4,5,6,7,8],3))
• The algorithm works by:
• Checking whether the first element in the list matches
the value we are searching for - since lys[0] is 1 and we are
searching for 3, we set the index to 1 and move on.
• Going through all the elements in the list, and while the
item at the index'th position is less than or equal to our
value, exponentially increasing the value of index in
multiples of two:
o index = 1, lys[1] is 2, which is less than 3, so the index
is multiplied by 2 and set to 2.
o index = 2, lys[2]is 3, which is equal to 3, so the index
is multiplied by 2 and set to 4.
o index = 4, lys[4] is 5, which is greater than 3; the loop
is broken at this point.
• It then performs a binary search by slicing the list; arr[:4].
In Python, this means that the sub list will contain all
elements up to the 4th element, so we're actually calling:
• >>> BinarySearch([1,2,3,4], 3)
• which would return:2
• Which is the index of the element we are searching for in
both the original list, and the sliced list that we pass on to
the binary search algorithm.
• Exponential search runs in O(log i) time, where i is the index
of the item we are searching for.
• In its worst case, the time complexity is O(log n), when the
last item is the item we are searching for (n being the
length of the array).
• Exponential search works better than binary search when
the element we are searching for is closer to the beginning
of the array.
• In practice, we use exponential search because it is one of
the most efficient search algorithms for unbounded or
infinite arrays.
Interpolation Search
• Interpolation search is another divide and conquer algorithm, similar to
binary search.
• Unlike binary search, it does not always begin searching at the middle.
Interpolation search calculates the probable position of the element we
are searching for using the formula:
• index = low + [(val-lys[low])*(high-low) / (lys[high]-lys[low])]
• Where the variables are:
• lys - our input array
• val - the element we are searching for
• index - the probable index of the search element. This is computed to be
a higher value when val is closer in value to the element at the end of the
array (lys[high]), and lower when val is closer in value to the element at
the start of the array (lys[low])
• low - the starting index of the array
• high - the last index of the array
• The algorithm searches by calculating the
value of index:
• If a match is found (when lys[index] == val),
the index is returned
• If the value of val is less than lys[index], the
value for the index is re-calculated using the
formula for the left sub-array
• If the value of val is greater than lys[index],
the value for the index is re-calculated using
the formula for the right sub-array
Let's go ahead and implement the
Interpolation search using Python:
def InterpolationSearch(lys, val):
    low = 0
    high = (len(lys) - 1)
    while low <= high and val >= lys[low] and val <= lys[high]:
        index = low + int(((float(high - low) / (lys[high] - lys[low])) * (val - lys[low])))
        if lys[index] == val:
            return index
        if lys[index] < val:
            low = index + 1
        else:
            high = index - 1
    return -1
If we use the function to compute:
>>> print(InterpolationSearch([1,2,3,4,5,6,7,8], 6))
• Our initial values would be:
• val = 6,
• low = 0,
• high = 7,
• lys[low] = 1,
• lys[high] = 8,
• index = 0 + [(6-1)*(7-0)/(8-1)] = 5
• Since lys[5] is 6, which is the value we are searching for, we stop executing
and return the result: 5
• If we have a large number of elements, and our index cannot be computed
in one iteration, we keep on re-calculating values for index after adjusting
the values of high and low in our formula.
• The time complexity of interpolation search is O(log log n) when values
are uniformly distributed.
• If values are not uniformly distributed, the worst-case time complexity is
O(n), the same as linear search.
• Interpolation search works best on uniformly distributed, sorted arrays.
• Whereas binary search starts in the middle and always divides into two,
interpolation search calculates the likely position of the element and
checks the index, making it more likely to find the element in a smaller
number of iterations.
Why Use Python For Searching?
• Python is highly readable and concise when compared to older programming
languages like Java, Fortran, C, and C++. One key advantage of using Python for
implementing search algorithms is that you don't have to worry about casting or
explicit typing.
• In Python, most of the search algorithms we discussed will work just as well if
we're searching for a String.
• Keep in mind that we do have to make changes to the code for algorithms which
use the search element for numeric calculations, like the interpolation search
algorithm.
• Python is also a good place to start if you want to compare the performance of
different search algorithms for your dataset; building a prototype in Python is
easier and faster because you can do more with fewer lines of code.
• To compare the performance of our implemented search algorithms against a
dataset, we can use the time library in Python:
import time

start = time.time()
# call the function here
end = time.time()
print(end - start)
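Putting that skeleton to work, a self-contained sketch (it re-declares the two search functions from earlier so it runs on its own; the variable names are ours) might look like:

```python
import time

def LinearSearch(lys, element):
    for i in range(len(lys)):
        if lys[i] == element:
            return i
    return -1

def BinarySearch(lys, val):
    first, last = 0, len(lys) - 1
    while first <= last:
        mid = (first + last) // 2
        if lys[mid] == val:
            return mid
        elif val < lys[mid]:
            last = mid - 1
        else:
            first = mid + 1
    return -1

data = list(range(100000))
target = 99999  # worst case for linear search: the last element

start = time.time()
linear_index = LinearSearch(data, target)
linear_time = time.time() - start

start = time.time()
binary_index = BinarySearch(data, target)
binary_time = time.time() - start

print(linear_index, binary_index)  # both 99999
print(linear_time, binary_time)    # timings vary by machine
```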
Conclusion
• There are many possible ways to search for an element within a collection.
• In this, we attempted to discuss a few search algorithms and their
implementations in Python.
• Choosing which algorithm to use is based on the data you have to search
through; your input array, which we've called lys in all our
implementations.
• If you want to search through an unsorted array or to find the first
occurrence of a search variable, the best option is linear search.
• If you want to search through a sorted array, there are many options of
which the simplest and fastest method is binary search.
• If you have a sorted array that you want to search through without using
the division operator, you can use either jump search or Fibonacci search.
• If you know that the element you're searching for is likely to be closer to
the start of the array, you can use exponential search.
• If your sorted array is also uniformly distributed, the fastest and most
efficient search algorithm to use would be interpolation search.
Python program to implement Binary
Search Algorithm
• Binary search, as the name suggests, repeatedly divides the list into
halves and searches in the half that can contain the key.
• One notable thing about binary search is that the list must be sorted
before the algorithm is executed.
• The list is divided into two halves by an index: find the mid element of
the list, so that start to mid-1 is one half and mid+1 to end is the other
half; check whether the key element is equal to, greater than, or less
than the mid element, and return the appropriate position of the key
element to be found.
• Binary Search Algorithm – Code Visualization
• Time Complexity
• Best Case O(1)
• Average Case O(log n)
• Worst Case O(log n)
Algorithm
• Given a list L of n elements with values or records L0, L1, …,
Ln-1, sorted in ascending order, and given a key/target value
K, binary search is used to find the index of K in L
1. Set a variable start to 0 and another variable end to n-1 (the
last index of the list)
2. if start > end, stop: K is not present in the list
3. calculate mid as (start + end)//2
4. if the key element is
A. equal to L[mid], the search is done, return mid
B. greater than L[mid], set start to mid + 1 and repeat from
step 2
C. less than L[mid], set end to mid - 1 and repeat from step 2
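The steps above translate almost line-for-line into a recursive sketch (function and parameter names here are illustrative, not part of the program that follows):

```python
def binary_search_rec(L, K, start, end):
    # Step 2: an empty range means K is not present
    if start > end:
        return -1
    # Step 3: middle index of the current range
    mid = (start + end) // 2
    # Step 4A: found K at mid
    if L[mid] == K:
        return mid
    # Step 4B: K can only be in the right half
    elif L[mid] < K:
        return binary_search_rec(L, K, mid + 1, end)
    # Step 4C: K can only be in the left half
    else:
        return binary_search_rec(L, K, start, mid - 1)

L = [2, 3, 4, 10, 40]
print(binary_search_rec(L, 10, 0, len(L) - 1))  # prints 3
```

Each recursive call shrinks the range [start, end] by half, which is where the O(log n) bound comes from.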
Program
__author__ = 'Avinash'

def binary_search(sorted_list, length, key):
    start = 0
    end = length - 1
    while start <= end:
        mid = (start + end) // 2
        if key == sorted_list[mid]:
            print("\nEntered number %d is present at position: %d" % (key, mid))
            return mid
        elif key < sorted_list[mid]:
            end = mid - 1
        else:
            start = mid + 1
    print("\nElement not found!")
    return -1

lst = []
size = int(input("Enter size of list: \t"))
for n in range(size):
    numbers = int(input("Enter any number: \t"))
    lst.append(numbers)
lst.sort()
print('\n\nThe list will be sorted, the sorted list is:', lst)
x = int(input("\nEnter the number to search: "))
binary_search(lst, size, x)
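In everyday Python code, the standard library's bisect module performs the same halving search; a minimal sketch of an equivalent lookup (the wrapper function name is ours, not part of the module):

```python
import bisect

def bisect_search(sorted_list, key):
    # bisect_left returns the insertion point for key;
    # key is present only if that index holds an equal element
    i = bisect.bisect_left(sorted_list, key)
    if i < len(sorted_list) and sorted_list[i] == key:
        return i
    return -1

lst = [11, 22, 33, 44, 55]
print(bisect_search(lst, 33))   # prints 2
print(bisect_search(lst, 40))   # prints -1
```

Unlike a plain binary search, bisect_left always lands on the first occurrence of a duplicated key, which sidesteps the multiple-occurrence caveat discussed earlier.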
Python - Sorting Algorithms
• Sorting refers to arranging data in a particular format. A sorting
algorithm specifies the way to arrange data in a particular order.
• The most common orders are numerical and lexicographical.
• The importance of sorting lies in the fact that data searching can be
optimized to a very high level if data is stored in a sorted manner.
• Sorting is also used to represent data in more readable formats.
Below we see five such implementations of sorting in Python.
• Bubble Sort
• Merge Sort
• Insertion Sort
• Shell Sort
• Selection Sort
Bubble Sort
• This simple sorting algorithm iterates over a list, comparing elements in pairs and
swapping them until the larger elements "bubble up" to the end of the list, and
the smaller elements stay at the "bottom".
def bubblesort(nums):
    # Swap the elements to arrange in order
    for iter_num in range(len(nums) - 1, 0, -1):
        for idx in range(iter_num):
            if nums[idx] > nums[idx + 1]:
                temp = nums[idx]
                nums[idx] = nums[idx + 1]
                nums[idx + 1] = temp

nums = [19, 2, 31, 45, 6, 11, 121, 27]
bubblesort(nums)
print(nums)
• When the above code is executed, it produces the following result −
• [2, 6, 11, 19, 27, 31, 45, 121]
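A common refinement of bubble sort, sketched below, tracks whether a pass made any swaps; if none did, the list is already sorted and the remaining passes can be skipped (the function name is ours):

```python
def bubblesort_optimized(nums):
    # Stop as soon as a full pass makes no swaps:
    # the list is then already in order
    for end in range(len(nums) - 1, 0, -1):
        swapped = False
        for idx in range(end):
            if nums[idx] > nums[idx + 1]:
                # Tuple assignment swaps without a temp variable
                nums[idx], nums[idx + 1] = nums[idx + 1], nums[idx]
                swapped = True
        if not swapped:
            break

nums = [19, 2, 31, 45, 6, 11, 121, 27]
bubblesort_optimized(nums)
print(nums)  # [2, 6, 11, 19, 27, 31, 45, 121]
```

On nearly-sorted input this early exit brings the best case down to O(n), while the worst case stays O(n²).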
Selection Sort
• In selection sort we start by finding the minimum
value in a given list and move it to a sorted list.
• Then we repeat the process for each of the
remaining elements in the unsorted list.
• The next element entering the sorted list is
compared with the existing elements and placed
at its correct position.
• So at the end all the elements from the unsorted
list are sorted.
def selection_sort(input_list):
    for idx in range(len(input_list)):
        min_idx = idx
        for j in range(idx + 1, len(input_list)):
            if input_list[min_idx] > input_list[j]:
                min_idx = j
        # Swap the minimum value with the compared value
        input_list[idx], input_list[min_idx] = input_list[min_idx], input_list[idx]

l = [19, 2, 31, 45, 30, 11, 121, 27]
selection_sort(l)
print(l)
• When the above code is executed, it produces the following
result −
• [2, 11, 19, 27, 30, 31, 45, 121]
Python Program for Selection Sort
• The selection sort algorithm sorts an array by
repeatedly finding the minimum element (considering
ascending order) from unsorted part and putting it at
the beginning.
• The algorithm maintains two subarrays in a given
array.
1) The subarray which is already sorted.
2) Remaining subarray which is unsorted.
• In every iteration of selection sort, the minimum
element (considering ascending order) from the
unsorted subarray is picked and moved to the sorted
subarray.
# Python program for implementation of Selection
# Sort
A = [64, 25, 12, 22, 11]

# Traverse through all array elements
for i in range(len(A)):
    # Find the minimum element in remaining
    # unsorted array
    min_idx = i
    for j in range(i + 1, len(A)):
        if A[min_idx] > A[j]:
            min_idx = j
    # Swap the found minimum element with
    # the first element
    A[i], A[min_idx] = A[min_idx], A[i]

# Driver code to test above
print("Sorted array")
for i in range(len(A)):
    print(A[i], end=" ")
Merge Sort
Merge sort first divides the array into equal halves and then combines them in a sorted manner.
def merge_sort(unsorted_list):
    if len(unsorted_list) <= 1:
        return unsorted_list
    # Find the middle point and divide the list
    middle = len(unsorted_list) // 2
    left_list = unsorted_list[:middle]
    right_list = unsorted_list[middle:]

    left_list = merge_sort(left_list)
    right_list = merge_sort(right_list)
    return list(merge(left_list, right_list))

# Merge the sorted halves
def merge(left_half, right_half):
    res = []
    while len(left_half) != 0 and len(right_half) != 0:
        if left_half[0] < right_half[0]:
            res.append(left_half[0])
            left_half.remove(left_half[0])
        else:
            res.append(right_half[0])
            right_half.remove(right_half[0])
    if len(left_half) == 0:
        res = res + right_half
    else:
        res = res + left_half
    return res

unsorted_list = [64, 34, 25, 12, 22, 11, 90]
print(merge_sort(unsorted_list))
• When the above code is executed, it produces the following result −
• [11, 12, 22, 25, 34, 64, 90]
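One caveat with the merge step above: remove(left_half[0]) rescans and reshifts the list on every iteration, making each step O(n). A sketch of the same merge using index pointers keeps each step O(1) (the function name is ours):

```python
def merge_indexed(left, right):
    res = []
    i = j = 0
    # Advance a pointer instead of deleting from the front of a list
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            res.append(left[i])
            i += 1
        else:
            res.append(right[j])
            j += 1
    # At most one of the two slices below is non-empty
    res.extend(left[i:])
    res.extend(right[j:])
    return res

print(merge_indexed([11, 25, 64], [12, 22, 90]))  # [11, 12, 22, 25, 64, 90]
```

Dropping this in place of merge() leaves the overall merge sort unchanged but restores the expected O(n log n) running time.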
Python program for Merging of list
elements
• Sometimes we need to merge several elements into a single
element in a list.
• This usually arises with character-to-string
conversion.
• This type of task is often required in the
development domain, for example to merge name characters into
one element.
• Let's discuss certain ways in which this can be
performed.
• Method #1 : Using join() + List Slicing
• The join() function can be coupled with list slicing to perform
the task of joining each character in the range picked by the
slice.
# Python3 code to demonstrate
# merging list elements
# using join() + list slicing
# initializing list
test_list = ['I', 'L', 'O', 'V', 'E', 'G', 'F', 'G']
# printing original list
print ("The original list is : " + str(test_list))
# using join() + list slicing
# merging list elements
test_list[5 : 8] = [''.join(test_list[5 : 8])]
# printing result
print ("The list after merging elements : " + str(test_list))
Output:
The original list is : ['I', 'L', 'O', 'V', 'E', 'G', 'F', 'G']
The list after merging elements : ['I', 'L', 'O', 'V', 'E', 'GFG']
• Method #2 : Using reduce() + lambda + list slicing
• The task of joining each element in a range is performed by the reduce
function and a lambda. The reduce function applies the lambda cumulatively
to each element in the sliced range.
• In Python 3, reduce() is no longer a built-in; it must be imported from functools.
# Python3 code to demonstrate
# merging list elements
# using reduce() + lambda + list slicing
from functools import reduce

# initializing list
test_list = ['I', 'L', 'O', 'V', 'E', 'G', 'F', 'G']
# printing original list
print("The original list is : " + str(test_list))
# using reduce() + lambda + list slicing
# merging list elements
test_list[5 : 8] = [reduce(lambda i, j: i + j, test_list[5 : 8])]
# printing result
print("The list after merging elements : " + str(test_list))
• Output:
• The original list is : ['I', 'L', 'O', 'V', 'E', 'G', 'F', 'G']
• The list after merging elements : ['I', 'L', 'O', 'V', 'E', 'GFG']
Python Program for Merge Sort
• Merge Sort is a Divide and Conquer algorithm.
• It divides input array in two halves, calls itself for
the two halves and then merges the two sorted
halves.
• The merge() function is used for merging two
halves.
• merge(arr, l, m, r) is the key process: it assumes
that arr[l..m] and arr[m+1..r] are sorted and
merges the two sorted sub-arrays into one.
# Python program for implementation of MergeSort

# Merges two subarrays of arr[].
# First subarray is arr[l..m]
# Second subarray is arr[m+1..r]
def merge(arr, l, m, r):
    n1 = m - l + 1
    n2 = r - m

    # create temp arrays
    L = [0] * (n1)
    R = [0] * (n2)

    # Copy data to temp arrays L[] and R[]
    for i in range(0, n1):
        L[i] = arr[l + i]
    for j in range(0, n2):
        R[j] = arr[m + 1 + j]

    # Merge the temp arrays back into arr[l..r]
    i = 0  # Initial index of first subarray
    j = 0  # Initial index of second subarray
    k = l  # Initial index of merged subarray
    while i < n1 and j < n2:
        if L[i] <= R[j]:
            arr[k] = L[i]
            i += 1
        else:
            arr[k] = R[j]
            j += 1
        k += 1

    # Copy the remaining elements of L[], if there
    # are any
    while i < n1:
        arr[k] = L[i]
        i += 1
        k += 1

    # Copy the remaining elements of R[], if there
    # are any
    while j < n2:
        arr[k] = R[j]
        j += 1
        k += 1

# l is for left index and r is right index of the
# sub-array of arr to be sorted
def mergeSort(arr, l, r):
    if l < r:
        # Same as (l+r)//2, but avoids overflow for
        # large l and r
        m = (l + (r - 1)) // 2

        # Sort first and second halves
        mergeSort(arr, l, m)
        mergeSort(arr, m + 1, r)
        merge(arr, l, m, r)

# Driver code to test above
arr = [12, 11, 13, 5, 6, 7]
n = len(arr)
print("Given array is")
for i in range(n):
    print(arr[i], end=" ")
mergeSort(arr, 0, n - 1)
print("\n\nSorted array is")
for i in range(n):
    print(arr[i], end=" ")
Output
• Given array is
• 12 11 13 5 6 7
• Sorted array is
• 5 6 7 11 12 13
Insertion Sort
• Insertion sort involves finding the right place for a given element in a sorted list. So
in the beginning, we compare the first two elements and sort them.
• Then we pick the third element and find its proper position among the previous
two sorted elements.
• This way we gradually go on adding more elements to the already sorted list by
putting them in their proper position.
def insertion_sort(InputList):
    for i in range(1, len(InputList)):
        j = i - 1
        nxt_element = InputList[i]
        # Shift larger elements right; check j >= 0 first
        # so we never index the list with j = -1
        while (j >= 0) and (InputList[j] > nxt_element):
            InputList[j + 1] = InputList[j]
            j = j - 1
        InputList[j + 1] = nxt_element

nums = [19, 2, 31, 45, 30, 11, 121, 27]
insertion_sort(nums)
print(nums)
• When the above code is executed, it produces the following result −
• [2, 11, 19, 27, 30, 31, 45, 121]
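Since the prefix to the left of the current element is always sorted, the search for the insertion point can itself be a binary search. A sketch using the standard library's bisect module (the function name is ours):

```python
import bisect

def insertion_sort_binary(nums):
    # For each element, binary-search its slot in the sorted
    # prefix nums[:i], then shift and insert with slice assignment
    for i in range(1, len(nums)):
        val = nums[i]
        pos = bisect.bisect_right(nums, val, 0, i)
        # Shift nums[pos:i] one place right and drop val in
        nums[pos + 1:i + 1] = nums[pos:i]
        nums[pos] = val

nums = [19, 2, 31, 45, 30, 11, 121, 27]
insertion_sort_binary(nums)
print(nums)  # [2, 11, 19, 27, 30, 31, 45, 121]
```

This cuts the comparisons per element from O(n) to O(log n), though the shifting still makes the overall worst case O(n²).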
  • 2. Iterators in Python • Iterator in python is an object that is used to iterate over iterable objects like lists, tuples, dicts, and sets. • The iterator object is initialized using the iter() method. It uses the next() method for iteration. • __iter(iterable)__ method that is called for the initialization of an iterator. • This returns an iterator object next ( __next__ in Python 3) The next method returns the next value for the iterable. • When we use a for loop to traverse any iterable object, internally it uses the iter() method to get an iterator object which further uses next() method to iterate over. • This method raises a StopIteration to signal the end of the iteration.
  • 3. How an iterator really works in python # Here is an example of a python inbuilt iterator # value can be anything which can be iterate iterable_value = 'Goods' iterable_obj = iter(iterable_value) while True: try: # Iterate by calling next item = next(iterable_obj) print(item) except StopIteration: # exception will happen when iteration will over break Output : G o o d s
  • 4. • Below is a simple Python custom iterator that creates iterator type that iterates from 10 to a given limit. • For example, if the limit is 15, then it prints 10 11 12 13 14 15. • And if the limit is 5, then it prints nothing.
  • 5. # A simple Python program to demonstrate # working of iterators using an example type # that iterates from 10 to given value # An iterable user defined type class Test: # Constructor def __init__(self, limit): self.limit = limit # Creates iterator object # Called when iteration is initialized def __iter__(self): self.x = 10 return self # To move to next element. In Python 3, # we should replace next with __next__ def __next__(self): # Store current value ofx x = self.x # Stop iteration if limit is reached if x > self.limit: raise StopIteration # Else increment and return old value self.x = x + 1; return x # Prints numbers from 10 to 15 for i in Test(15): print(i) # Prints nothing for i in Test(5): print(i)
  • 6. Output • 10 • 11 • 12 • 13 • 14 • 15
  • 7. Difference between Recursion and Iteration • A program is called recursive when an entity calls itself. A program is call iterative when there is a loop (or repetition). • Example: Program to find the factorial of a number
  • 8. # Python3 program to find factorial of given number # ----- Recursion ----- # method to find factorial of given number def factorialUsingRecursion(n): if (n == 0): return 1; # recursion call return n * factorialUsingRecursion(n - 1); # ----- Iteration ----- # Method to find the factorial of a given number def factorialUsingIteration(n): res = 1; # using iteration for i in range(2, n + 1): res *= i; return res; # Driver method num = 5; print("Factorial of",num,"using Recursion is:", factorialUsingRecursion(5)); print("Factorial of",num,"using Iteration is:", factorialUsingIteration(5)); Output: Factorial of 5 using Recursion is: 120 Factorial of 5 using Iteration is: 120
  • 9. • Time Complexity: Finding the Time complexity of Recursion is more difficult than that of Iteration. – Recursion: Time complexity of recursion can be found by finding the value of the nth recursive call in terms of the previous calls. – Thus, finding the destination case in terms of the base case, and solving in terms of the base case gives us an idea of the time complexity of recursive equations. – Iteration: Time complexity of iteration can be found by finding the number of cycles being repeated inside the loop.
  • 10. • Usage: Usage of either of these techniques is a trade-off between time complexity and size of code. If time complexity is the point of focus, and number of recursive calls would be large, it is better to use iteration. However, if time complexity is not an issue and shortness of code is, recursion would be the way to go. – Recursion: Recursion involves calling the same function again, and hence, has a very small length of code. However, as we saw in the analysis, the time complexity of recursion can get to be exponential when there are a considerable number of recursive calls. Hence, usage of recursion is advantageous in shorter code, but higher time complexity. – Iteration: Iteration is repetition of a block of code. This involves a larger size of code, but the time complexity is generally lesser than it is for recursion.
  • 11. • Overhead: Recursion has a large amount of Overhead as compared to Iteration. – Recursion: Recursion has the overhead of repeated function calls, that is due to repetitive calling of the same function, the time complexity of the code increases manifold. – Iteration: Iteration does not involve any such overhead. • Infinite Repetition: Infinite Repetition in recursion can lead to CPU crash but in iteration, it will stop when memory is exhausted. – Recursion: In Recursion, Infinite recursive calls may occur due to some mistake in specifying the base condition, which on never becoming false, keeps calling the function, which may lead to system CPU crash. – Iteration: Infinite iteration due to mistake in iterator assignment or increment, or in the terminating condition, will lead to infinite loops, which may or may not lead to system errors, but will surely stop program execution any further.
  • 12. Fibonacci sequence: • A Fibonacci sequence is a sequence of integers which first two terms are 0 and 1 and all other terms of the sequence are obtained by adding their preceding two numbers. • For example: 0, 1, 1, 2, 3, 5, 8, 13 and so on...
  • 13. example 1. def recur_fibo(n): 2. if n <= 1: 3. return n 4. else: 5. return(recur_fibo(n-1) + recur_fibo(n-2)) 6. # take input from the user 7. nterms = int(input("How many terms? ")) 8. # check if the number of terms is valid 9. if nterms <= 0: 10. print("Please enter a positive integer") 11. else: 12. print("Fibonacci sequence:") 13. for i in range(nterms): 14. print(recur_fibo(i))
  • 15. Recursion • The process in which a function calls itself directly or indirectly is called recursion and the corresponding function is called as recursive function. • Using recursive algorithm, certain problems can be solved quite easily. • Examples of such problems are Towers of Hanoi (TOH), Inorder/Preorder/Postorder Tree Traversals, DFS of Graph, etc.
  • 16. A Mathematical Interpretation • Let us consider a problem that a programmer have to determine the sum of first n natural numbers, there are several ways of doing that but the simplest approach is simply add the numbers starting from 1 to n. • So the function simply looks like, • approach(1) – Simply adding one by one • f(n) = 1 + 2 + 3 +……..+ n • but there is another mathematical approach of representing this, • approach(2) – Recursive adding • f(n) = 1 n=1 • f(n) = n + f(n-1) n>1 • There is a simple difference between the approach (1) and approach(2) and that is in approach(2) the function “ f( ) ” itself is being called inside the function, so this phenomenon is named as recursion and the function containing recursion is called recursive function, at the end this is a great tool in the hand of the programmers to code some problems in a lot easier and efficient way.
  • 17. Example # A Python 3 program to # demonstrate working of # recursion def printFun(test): if (test < 1): return else: print(test, end=" ") printFun(test-1) # statement 2 print(test, end=" ") return # Driver Code test = 3 printFun(test) • Output : • 3 2 1 1 2 3
  • 18. # Python code to implement Fibonacci series # Function for fibonacci def fib(n): # Stop condition if (n == 0): return 0 # Stop condition if (n == 1 or n == 2): return 1 # Recursion function else: return (fib(n - 1) + fib(n - 2)) # Driver Code # Initialize variable n. n = 5; print("Fibonacci series of 5 numbers is :",end=" ") # for loop to print the fiboancci series. for i in range(0,n): print(fib(i),end=" ") Output Fibonacci series of 5 numbers is: 0 1 1 2 3
  • 19. Tower of Hanoi • Tower of Hanoi is a mathematical puzzle where we have three rods and n disks. • The objective of the puzzle is to move the entire stack to another rod, obeying • The following simple rules: 1) Only one disk can be moved at a time. 2) Each move consists of taking the upper disk from one of the stacks and placing it on top of another stack i.e. a disk can only be moved if it is the uppermost disk on a stack. 3) No disk may be placed on top of a smaller disk.
  • 21. # Recursive Python function to solve tower of hanoi def TowerOfHanoi(n , from_rod, to_rod, aux_rod): if n == 1: print "Move disk 1 from rod",from_rod,"to rod",to_rod return TowerOfHanoi(n-1, from_rod, aux_rod, to_rod) print "Move disk",n,"from rod",from_rod,"to rod",to_rod TowerOfHanoi(n-1, aux_rod, to_rod, from_rod) # Driver code n = 4 TowerOfHanoi(n, 'A', 'C', 'B') # A, C, B are the name of rods
  • 22. Output: • Move disk 1 from rod A to rod B • Move disk 2 from rod A to rod C • Move disk 1 from rod B to rod C • Move disk 3 from rod A to rod B • Move disk 1 from rod C to rod A • Move disk 2 from rod C to rod B • Move disk 1 from rod A to rod B • Move disk 4 from rod A to rod C • Move disk 1 from rod B to rod C • Move disk 2 from rod B to rod A • Move disk 1 from rod C to rod A • Move disk 3 from rod B to rod C • Move disk 1 from rod A to rod B • Move disk 2 from rod A to rod C • Move disk 1 from rod B to rod C
  • 23. Search Algorithms in Python • Searching for data stored in different data structures is a crucial part of pretty much every single application. • There are many different algorithms available to utilize when searching, and each have different implementations and rely on different data structures to get the job done. • Being able to choose a specific algorithm for a given task is a key skill for developers and can mean the difference between a fast, reliable and stable application and an application that crumbles from a simple request.
  • 24. • Membership Operators • Linear Search • Binary Search • Jump Search • Fibonacci Search • Exponential Search • Interpolation Search
  • 25. Membership Operators • Algorithms develop and become optimized over time as a result of constant evolution and the need to find the most efficient solutions for underlying problems in different domains. • One of the most common problems in the domain of Computer Science is searching through a collection and determining whether a given object is present in the collection or not. • Almost every programming language has its own implementation of a basic search algorithm, usually as a function which returns a Boolean value of True or False when an item is found in a given collection of items. • In Python, the easiest way to search for an object is to use Membership Operators - named that way because they allow us to determine whether a given object is a member in a collection.
  • 26. • These operators can be used with any iterable data structure in Python, including Strings, Lists, and Tuples. • in - Returns True if the given element is a part of the structure. • not in - Returns True if the given element is not a part of the structure. >>> 'apple' in ['orange', 'apple', 'grape'] True >>> 't' in 'stackabuse' True >>> 'q' in 'stackabuse' False >>> 'q' not in 'stackabuse' True
  • 27. • Membership operators suffice when all we need to do is find whether a substring exists within a given string, or determine whether two Strings, Lists, or Tuples intersect in terms of the objects they hold. • In most cases we need the position of the item in the sequence, in addition to determining whether or not it exists; membership operators do not meet this requirement. • There are many search algorithms that don't depend on built-in operators and can be used to search for values faster and/or more efficiently. • In addition, they can yield more information, such as the position of the element in the collection, rather than just being able to determine its existence.
  • 28. Linear Search • Linear search is one of the simplest searching algorithms, and the easiest to understand. • We can think of it as a ramped-up version of our own implementation of Python's in operator. • The algorithm consists of iterating over an array and returning the index of the first occurrence of an item once it is found: def LinearSearch(lys, element): for i in range (len(lys)): if lys[i] == element: return i return -1 So if we use the function to compute: >>> print(LinearSearch([1,2,3,4,5,2,1], 2)) Upon executing the code, we're greeted with: 1
  • 29. • This is the index of the first occurrence of the item we are searching for - keeping in mind that Python indexes are 0-based. • The time complexity of linear search is O(n), meaning that the time taken to execute increases with the number of items in our input list lys. • Linear search is not often used in practice, because the same efficiency can be achieved by using inbuilt methods or existing operators, and it is not as fast or efficient as other search algorithms. • Linear search is a good fit for when we need to find the first occurrence of an item in an unsorted collection because unlike most other search algorithms, it does not require that a collection be sorted before searching begins.
  • 30. • Problem: Given an array arr[] of n elements, write a function to search a given element x in arr[]. • Examples : • Input : arr[] = {10, 20, 80, 30, 60, 50, 110, 100, 130, 170} x = 110; • Output : 6 • Element x is present at index 6 • Input : arr[] = {10, 20, 80, 30, 60, 50, 110, 100, 130, 170} x = 175; • Output : -1 • Element x is not present in arr[].
  • 31. • A simple approach is to do a linear search, i.e Start from the leftmost element of arr[] and one by one compare x with each element of arr[] • If x matches with an element, return the index. • If x doesn’t match with any of elements, return - 1.
  • 32. # Python3 code to linearly search x in arr[]. # If x is present then return its location, # otherwise return -1 def search(arr, n, x): for i in range(0, n): if (arr[i] == x): return i return -1 # Driver Code arr = [2, 3, 4, 10, 40] x = 10 n = len(arr) # Function call result = search(arr, n, x) if(result == -1): print("Element is not present in array") else: print("Element is present at index", result) Output Element is present at index 3
  • 33. Binary Search • Binary search follows a divide and conquer methodology. • It is faster than linear search but requires that the array be sorted before the algorithm is executed. • Assuming that we're searching for a value val in a sorted array, the algorithm compares val to the value of the middle element of the array, which we'll call mid. • If mid is the element we are looking for (best case), we return its index. • If not, we identify which side of mid val is more likely to be on based on whether val is smaller or greater than mid, and discard the other side of the array. • We then recursively or iteratively follow the same steps, choosing a new value for mid, comparing it with val and discarding half of the possible matches in each iteration of the algorithm.
  • 34. • The binary search algorithm can be written either recursively or iteratively. • Recursion is generally slower in Python because it requires the allocation of new stack frames. • Since a good search algorithm should be as fast and accurate as possible, let's consider the iterative implementation of binary search: def BinarySearch(lys, val): first = 0 last = len(lys)-1 index = -1 while (first <= last) and (index == -1): mid = (first+last)//2 if lys[mid] == val: index = mid else: if val<lys[mid]: last = mid -1 else: first = mid +1 return index
  • 35. If we use the function to compute: >>> BinarySearch([10,20,30,40,50], 20) We get the result: 1 Which is the index of the value that we are searching for. • The action that the algorithm performs next in each iteration is one of several possibilities: • Returning the index of the current element • Searching through the left half of the array • Searching through the right half of the array • We can only pick one possibility per iteration, and our pool of possible matches gets divided by two in each iteration. This makes the time complexity of binary search O(log n).
  • 36. • One drawback of binary search is that if there are multiple occurrences of an element in the array, it does not return the index of the first element, but rather the index of the element closest to the middle: • >>> print(BinarySearch([4,4,4,4,4], 4)) • Running this piece of code will result in the index of the middle element: • 2 • For comparison, performing a linear search on the same array would return: • 0 • Which is the index of the first element. However, we cannot categorically say that binary search never returns the first occurrence of a duplicated element - it can happen to do so when the middle of a search range lands on the first occurrence. • If we perform binary search on the array [1,2,3,3,5,6] for instance, and search for 3, we would get 2 as the result, which is the index of the first occurrence. • Binary search is quite commonly used in practice because it is efficient and fast when compared to linear search. • However, it does have some shortcomings, such as its reliance on the division (//) operator. • There are many other divide and conquer search algorithms that are derived from binary search; let's examine a few of those next.
  • 37. Fibonacci Search • Fibonacci search is another divide and conquer algorithm which bears similarities to both binary search and jump search. • It gets its name because it uses Fibonacci numbers to calculate the block size or search range in each step. • Fibonacci numbers start with zero and follow the pattern 0, 1, 1, 2, 3, 5, 8, 13, 21... where each element is the sum of the two numbers that immediately precede it. • The algorithm works with three Fibonacci numbers at a time. Let's call the three numbers fibM, fibM_minus_1, and fibM_minus_2 where fibM_minus_1 and fibM_minus_2 are the two numbers immediately before fibM in the sequence: • fibM = fibM_minus_1 + fibM_minus_2 • We initialize the values to 0, 1, and 1 (the first three numbers in the Fibonacci sequence) to avoid getting an index error in the case where our search array lys contains a very small number of items. • Then we choose the smallest number of the Fibonacci sequence that is greater than or equal to the number of elements in our search array lys, as the value of fibM, and the two Fibonacci numbers immediately before it as the values of fibM_minus_1 and fibM_minus_2.
  • 38. • While the array has elements remaining and the value of fibM is greater than one, we: • Compare val with the element at index min(index + fibM_minus_2, len(lys)-1), and return that index if it matches. • If val is greater than the element we are currently looking at, we move the values of fibM, fibM_minus_1 and fibM_minus_2 one step down in the Fibonacci sequence, and set the index to the index of the element. • If val is less than the element we are currently looking at, we move the values of fibM, fibM_minus_1 and fibM_minus_2 two steps down in the Fibonacci sequence.
  • 39. • Let's take a look at the Python implementation of this algorithm: def FibonacciSearch(lys, val): fibM_minus_2 = 0 fibM_minus_1 = 1 fibM = fibM_minus_1 + fibM_minus_2 while (fibM < len(lys)): fibM_minus_2 = fibM_minus_1 fibM_minus_1 = fibM fibM = fibM_minus_1 + fibM_minus_2 index = -1 while (fibM > 1): i = min(index + fibM_minus_2, (len(lys)-1)) if (lys[i] < val): fibM = fibM_minus_1 fibM_minus_1 = fibM_minus_2 fibM_minus_2 = fibM - fibM_minus_1 index = i elif (lys[i] > val): fibM = fibM_minus_2 fibM_minus_1 = fibM_minus_1 - fibM_minus_2 fibM_minus_2 = fibM - fibM_minus_1 else : return i if(fibM_minus_1 and index < (len(lys)-1) and lys[index+1] == val): return index+1 return -1
  • 40. • If we use the FibonacciSearch function to compute: >>> print(FibonacciSearch([1,2,3,4,5,6,7,8,9,10,11], 6)) • Let's take a look at the step-by-step process of this search: • Determining the smallest Fibonacci number greater than or equal to the length of the list as fibM; in this case, the smallest Fibonacci number meeting our requirements is 13. • The values would be assigned as: o fibM = 13 o fibM_minus_1 = 8 o fibM_minus_2 = 5 o index = -1
  • 41. • Next, we check the element lys[4], where 4 is the minimum of -1+5 and the last index. Since the value of lys[4] is 5, which is smaller than the value we are searching for, we move the Fibonacci numbers one step down in the sequence, making the values: o fibM = 8 o fibM_minus_1 = 5 o fibM_minus_2 = 3 o index = 4 • Next, we check the element lys[7], where 7 is the minimum of 4+3 and the last index. Since the value of lys[7] is 8, which is greater than the value we are searching for, we move the Fibonacci numbers two steps down in the sequence. o fibM = 3 o fibM_minus_1 = 2 o fibM_minus_2 = 1 o index = 4 • Now we check the element lys[5], where 5 is the minimum of 4+1 and the last index. The value of lys[5] is 6, which is the value we are searching for!
  • 42. • The result, as expected, is: 5 • The time complexity for Fibonacci search is O(log n); the same as binary search. • This means the algorithm is faster than both linear search and jump search in most cases. • Fibonacci search can be used when we have a very large number of elements to search through, and we want to reduce the inefficiency associated with using an algorithm which relies on the division operator. • An additional advantage of using Fibonacci search is that it can accommodate input arrays that are too large to be held in CPU cache or RAM, because it searches through elements in increasing step sizes, not a fixed size.
  • 43. Jump Search • Jump Search is similar to binary search in that it works on a sorted array, and uses a similar divide and conquer approach to search through it. • It can be classified as an improvement of the linear search algorithm since it depends on linear search to perform the actual comparison when searching for a value. • Given a sorted array, instead of searching through the array elements incrementally, we search in jumps. • So in our input list lys, if we have a jump size of jump our algorithm will consider elements in the order lys[0], lys[0+jump], lys[0+2jump], lys[0+3jump] and so on. • With each jump, we store the previous value we looked at and its index. When we find a set of values where lys[i]<element<lys[i+jump], we perform a linear search with lys[i] as the left-most element and lys[i+jump] as the right-most element in our search set:
  • 44. import math def JumpSearch (lys, val): length = len(lys) jump = int(math.sqrt(length)) left, right = 0, 0 while left < length and lys[left] <= val: right = min(length - 1, left + jump) if lys[left] <= val and lys[right] >= val: break left += jump if left >= length or lys[left] > val: return -1 right = min(length - 1, right) i = left while i <= right and lys[i] <= val: if lys[i] == val: return i i += 1 return -1
  • 45. • Since this is a complex algorithm, let's consider the step-by- step computation of jump search with this input: • >>> print(JumpSearch([1,2,3,4,5,6,7,8,9], 5)) • Jump search would first determine the jump size by computing math.sqrt(len(lys)). Since we have 9 elements, the jump size would be √9 = 3. • Next, we compute the value of the right variable, which is the minimum of the length of the array minus 1, or the value of left+jump, which in our case would be 0+3 = 3. Since 3 is smaller than 8 we use 3 as the value of right. • Now we check whether our search element, 5, is between lys[0] and lys[3]. Since 5 is not between 1 and 4, we move on. • Next, we do the calculations again and check whether our search element is between lys[3] and lys[6], where 6 is 3+jump. Since 5 is between 4 and 7, we do a linear search on the elements between lys[3] and lys[6] and return the index of our element as: 4
  • 46. • The time complexity of jump search is O(√n), where √n is the jump size, and n is the length of the list, placing jump search between the linear search and binary search algorithms in terms of efficiency. • The single most important advantage of jump search when compared to binary search is that it does not rely on the division operator (/). • In most CPUs, using the division operator is costly when compared to other basic arithmetic operations (addition, subtraction, and multiplication), because the implementation of the division algorithm is iterative. • The cost by itself is very small, but when the number of elements to search through is very large, and the number of division operations that we need to perform increases, the cost can add up incrementally. • Therefore jump search is better than binary search when there is a large number of elements in a system where even a small increase in speed matters.
  • 47. • To make jump search faster, we could use binary search or another internal jump search to search through the blocks, instead of relying on the much slower linear search.
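The improvement just described can be sketched as follows: jump ahead in √n-sized blocks as before, then binary-search inside the block that could contain the value instead of scanning it linearly (the name jump_search_with_binary is ours, not from the slides):

```python
import math

def jump_search_with_binary(lys, val):
    length = len(lys)
    jump = int(math.sqrt(length))
    left = 0
    # Jump ahead in blocks until we reach the block that could contain val
    while left < length and lys[min(length - 1, left + jump)] < val:
        left += jump
    # Binary search inside the located block instead of a linear scan
    right = min(length - 1, left + jump)
    while left <= right:
        mid = (left + right) // 2
        if lys[mid] == val:
            return mid
        elif lys[mid] < val:
            left = mid + 1
        else:
            right = mid - 1
    return -1

print(jump_search_with_binary([1, 2, 3, 4, 5, 6, 7, 8, 9], 5))  # 4
```

With O(√n) jumps followed by an O(log √n) search of one block, the overall cost is still dominated by the jumping phase, but the constant factor of the final scan shrinks.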
  • 48. Exponential Search • Exponential search is another search algorithm that can be implemented quite simply in Python, compared to jump search and Fibonacci search which are both a bit complex. • It is also known by the names galloping search, doubling search and Struzik search. • Exponential search depends on binary search to perform the final comparison of values. • The algorithm works by: • Determining the range where the element we're looking for is likely to be • Using binary search for the range to find the exact index of the item
  • 49. The Python implementation of the exponential search algorithm is: def ExponentialSearch(lys, val): if lys[0] == val: return 0 index = 1 while index < len(lys) and lys[index] <= val: index = index * 2 return BinarySearch(lys[:min(index, len(lys))], val) If we use the function to find the value of: >>> print(ExponentialSearch([1,2,3,4,5,6,7,8],3))
  • 50. • The algorithm works by: • Checking whether the first element in the list matches the value we are searching for - since lys[0] is 1 and we are searching for 3, we set the index to 1 and move on. • Going through all the elements in the list, and while the item at the index'th position is less than or equal to our value, exponentially increasing the value of index in multiples of two: o index = 1, lys[1] is 2, which is less than 3, so the index is multiplied by 2 and set to 2. o index = 2, lys[2] is 3, which is equal to 3, so the index is multiplied by 2 and set to 4. o index = 4, lys[4] is 5, which is greater than 3; the loop is broken at this point. • It then performs a binary search by slicing the list: lys[:4]. In Python, this means that the sub list will contain all elements up to the 4th element, so we're actually calling:
  • 51. • >>> BinarySearch([1,2,3,4], 3) • which would return: 2 • Which is the index of the element we are searching for in both the original list, and the sliced list that we pass on to the binary search algorithm. • Exponential search runs in O(log i) time, where i is the index of the item we are searching for. • In its worst case, the time complexity is O(log n), when the last item is the item we are searching for (n being the length of the array). • Exponential search works better than binary search when the element we are searching for is closer to the beginning of the array. • In practice, we use exponential search because it is one of the most efficient search algorithms for unbounded or infinite arrays.
  • 52. Interpolation Search • Interpolation search is another divide and conquer algorithm, similar to binary search. • Unlike binary search, it does not always begin searching at the middle. Interpolation search calculates the probable position of the element we are searching for using the formula: • index = low + [(val-lys[low])*(high-low) / (lys[high]-lys[low])] • Where the variables are: • lys - our input array • val - the element we are searching for • index - the probable index of the search element. This is computed to be a higher value when val is closer in value to the element at the end of the array (lys[high]), and lower when val is closer in value to the element at the start of the array (lys[low]) • low - the starting index of the array • high - the last index of the array
  • 53. • The algorithm searches by calculating the value of index: • If a match is found (when lys[index] == val), the index is returned • If the value of val is less than lys[index], the value for the index is re-calculated using the formula for the left sub-array • If the value of val is greater than lys[index], the value for the index is re-calculated using the formula for the right sub-array
  • 54. Let's go ahead and implement the Interpolation search using Python: def InterpolationSearch(lys, val): low = 0 high = (len(lys) - 1) while low <= high and val >= lys[low] and val <= lys[high]: index = low + int(((float(high - low) / ( lys[high] - lys[low])) * ( val - lys[low]))) if lys[index] == val: return index if lys[index] < val: low = index + 1 else: high = index - 1 return -1 If we use the function to compute: >>> print(InterpolationSearch([1,2,3,4,5,6,7,8], 6))
  • 55. • Our initial values would be: • val = 6, • low = 0, • high = 7, • lys[low] = 1, • lys[high] = 8, • index = 0 + [(6-1)*(7-0)/(8-1)] = 5 • Since lys[5] is 6, which is the value we are searching for, we stop executing and return the result: 5 • If we have a large number of elements, and our index cannot be computed in one iteration, we keep on re-calculating values for index after adjusting the values of high and low in our formula. • The time complexity of interpolation search is O(log log n) when values are uniformly distributed. • If values are not uniformly distributed, the worst-case time complexity is O(n), the same as linear search. • Interpolation search works best on uniformly distributed, sorted arrays. • Whereas binary search starts in the middle and always divides into two, interpolation search calculates the likely position of the element and checks the index, making it more likely to find the element in a smaller number of iterations.
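To illustrate the "smaller number of iterations" claim, here is a sketch that counts probes per search; on uniformly distributed data the first interpolated index often lands directly on the target (the function name and the probes counter are ours, added for illustration, and lys[high] == lys[low] is guarded to avoid division by zero):

```python
def interpolation_search_counted(lys, val):
    low, high = 0, len(lys) - 1
    probes = 0
    while low <= high and lys[low] <= val <= lys[high]:
        probes += 1
        if lys[high] == lys[low]:
            # All remaining values are equal; probe low directly
            index = low
        else:
            # Interpolate the probable position of val
            index = low + (high - low) * (val - lys[low]) // (lys[high] - lys[low])
        if lys[index] == val:
            return index, probes
        if lys[index] < val:
            low = index + 1
        else:
            high = index - 1
    return -1, probes

# Uniformly distributed data: found in a single probe
print(interpolation_search_counted(list(range(0, 1000, 2)), 678))  # (339, 1)
```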
  • 56. Why Use Python For Searching? • Python is highly readable and concise when compared to older programming languages like Java, Fortran, C, and C++. One key advantage of using Python for implementing search algorithms is that you don't have to worry about casting or explicit typing. • In Python, most of the search algorithms we discussed will work just as well if we're searching for a String. • Keep in mind that we do have to make changes to the code for algorithms which use the search element for numeric calculations, like the interpolation search algorithm. • Python is also a good place to start if you want to compare the performance of different search algorithms for your dataset; building a prototype in Python is easier and faster because you can do more with fewer lines of code. • To compare the performance of our implemented search algorithms against a dataset, we can use the time library in Python: import time start = time.time() # call the function here end = time.time() print(end - start)
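Putting the timing snippet to work, the sketch below compares a linear and a binary search over the same sorted data in their worst case (the function names and the one-million-element test list are our illustrative choices):

```python
import time

def linear_search(lys, val):
    for i, item in enumerate(lys):
        if item == val:
            return i
    return -1

def binary_search(lys, val):
    first, last = 0, len(lys) - 1
    while first <= last:
        mid = (first + last) // 2
        if lys[mid] == val:
            return mid
        elif val < lys[mid]:
            last = mid - 1
        else:
            first = mid + 1
    return -1

data = list(range(1_000_000))
for search in (linear_search, binary_search):
    start = time.time()
    search(data, 999_999)   # worst case for linear search: the last element
    end = time.time()
    print(search.__name__, end - start)
```

The binary search should finish orders of magnitude faster; for finer-grained measurements, time.perf_counter() offers a higher-resolution clock than time.time().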
  • 57. Conclusion • There are many possible ways to search for an element within a collection. • In this unit, we discussed a few search algorithms and their implementations in Python. • Choosing which algorithm to use is based on the data you have to search through; your input array, which we've called lys in all our implementations. • If you want to search through an unsorted array or to find the first occurrence of a search variable, the best option is linear search. • If you want to search through a sorted array, there are many options, of which the simplest and fastest method is binary search. • If you have a sorted array that you want to search through without using the division operator, you can use either jump search or Fibonacci search. • If you know that the element you're searching for is likely to be closer to the start of the array, you can use exponential search. • If your sorted array is also uniformly distributed, the fastest and most efficient search algorithm to use would be interpolation search.
  • 58. Python program to implement Binary Search Algorithm • As the name suggests, binary search works by repeatedly dividing the list into halves and searching the half that could contain the key. • One notable thing about binary search is that the list must be sorted before the algorithm is executed. • Find the mid element of the list; start to mid-1 is one half and mid+1 to end is the other. Check whether the key is equal to, greater than, or less than the mid element, and continue in the appropriate half until the position of the key is found. • Time Complexity • Best Case O(1) • Average Case O(log n) • Worst Case O(log n)
  • 59. Algorithm • Given a list L of n elements with values or records L0, L1, …, Ln-1, sorted in ascending order, and given a key/target value K, binary search is used to find the index of K in L 1. Set a variable start to 0 and another variable end to n-1 (the last index of the list) 2. if start > end, stop: the key is not present in the list 3. calculate mid as (start + end)//2 4. if the key element is A. equal to L[mid], the search is done; return the position mid B. greater than L[mid], set start to mid + 1 and repeat from step 2 C. less than L[mid], set end to mid - 1 and repeat from step 2
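As an aside, Python's standard library already provides binary search primitives in the bisect module; the steps above can be expressed with bisect_left, which returns the leftmost insertion point for the key (the wrapper name binary_search_bisect is ours):

```python
from bisect import bisect_left

def binary_search_bisect(sorted_list, key):
    # bisect_left returns the index where key would be inserted
    # to keep the list sorted; if the element there equals key,
    # that index is the (leftmost) position of key
    i = bisect_left(sorted_list, key)
    if i < len(sorted_list) and sorted_list[i] == key:
        return i
    return -1

print(binary_search_bisect([2, 3, 4, 10, 40], 10))  # 3
print(binary_search_bisect([2, 3, 4, 10, 40], 7))   # -1
```

Unlike the hand-written version, this variant always returns the first occurrence of a duplicated key, since bisect_left locates the leftmost insertion point.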
  • 60. Program __author__ = 'Avinash' def binary_search(sorted_list, length, key): start = 0 end = length-1 while start <= end: mid = int((start + end)/2) if key == sorted_list[mid]: print("\nEntered number %d is present at position: %d" % (key, mid)) return mid elif key < sorted_list[mid]: end = mid - 1 elif key > sorted_list[mid]: start = mid + 1 print("\nElement not found!") return -1 lst = [] size = int(input("Enter size of list: \t")) for n in range(size): numbers = int(input("Enter any number: \t")) lst.append(numbers) lst.sort() print('\n\nThe list will be sorted, the sorted list is:', lst) x = int(input("\nEnter the number to search: ")) binary_search(lst, size, x)
  • 63. Python - Sorting Algorithms • Sorting refers to arranging data in a particular format. Sorting algorithm specifies the way to arrange data in a particular order. • Most common orders are in numerical or lexicographical order. • The importance of sorting lies in the fact that data searching can be optimized to a very high level, if data is stored in a sorted manner. • Sorting is also used to represent data in more readable formats. Below we see five such implementations of sorting in python. • Bubble Sort • Merge Sort • Insertion Sort • Shell Sort • Selection Sort
  • 64. Bubble Sort • This simple sorting algorithm iterates over a list, comparing elements in pairs and swapping them until the larger elements "bubble up" to the end of the list, and the smaller elements stay at the "bottom". def bubblesort(list): # Swap the elements to arrange in order for iter_num in range(len(list)-1,0,-1): for idx in range(iter_num): if list[idx]>list[idx+1]: temp = list[idx] list[idx] = list[idx+1] list[idx+1] = temp list = [19,2,31,45,6,11,121,27] bubblesort(list) print(list) • When the above code is executed, it produces the following result − • [2, 6, 11, 19, 27, 31, 45, 121]
  • 65. Selection Sort • In selection sort we start by finding the minimum value in a given list and move it to a sorted list. • Then we repeat the process for each of the remaining elements in the unsorted list. • The next element entering the sorted list is compared with the existing elements and placed at its correct position. • So at the end all the elements from the unsorted list are sorted.
  • 66. def selection_sort(input_list): for idx in range(len(input_list)): min_idx = idx for j in range( idx +1, len(input_list)): if input_list[min_idx] > input_list[j]: min_idx = j # Swap the minimum value with the compared value input_list[idx], input_list[min_idx] = input_list[min_idx], input_list[idx] l = [19,2,31,45,30,11,121,27] selection_sort(l) print(l) • When the above code is executed, it produces the following result − • [2, 11, 19, 27, 30, 31, 45, 121]
  • 67. Python Program for Selection Sort • The selection sort algorithm sorts an array by repeatedly finding the minimum element (considering ascending order) from unsorted part and putting it at the beginning. • The algorithm maintains two subarrays in a given array. 1) The subarray which is already sorted. 2) Remaining subarray which is unsorted. • In every iteration of selection sort, the minimum element (considering ascending order) from the unsorted subarray is picked and moved to the sorted subarray.
  • 68. # Python program for implementation of Selection # Sort A = [64, 25, 12, 22, 11] # Traverse through all array elements for i in range(len(A)): # Find the minimum element in remaining # unsorted array min_idx = i for j in range(i+1, len(A)): if A[min_idx] > A[j]: min_idx = j # Swap the found minimum element with # the first element A[i], A[min_idx] = A[min_idx], A[i] # Driver code to test above print("Sorted array") for i in range(len(A)): print(A[i], end=" ")
  • 69. Merge Sort Merge sort first divides the array into equal halves and then combines them in a sorted manner. def merge_sort(unsorted_list): if len(unsorted_list) <= 1: return unsorted_list # Find the middle point and devide it middle = len(unsorted_list) // 2 left_list = unsorted_list[:middle] right_list = unsorted_list[middle:] left_list = merge_sort(left_list) right_list = merge_sort(right_list) return list(merge(left_list, right_list)) # Merge the sorted halves def merge(left_half,right_half): res = [] while len(left_half) != 0 and len(right_half) != 0: if left_half[0] < right_half[0]: res.append(left_half[0]) left_half.remove(left_half[0]) else: res.append(right_half[0]) right_half.remove(right_half[0]) if len(left_half) == 0: res = res + right_half else: res = res + left_half return res unsorted_list = [64, 34, 25, 12, 22, 11, 90] print(merge_sort(unsorted_list)) • When the above code is executed, it produces the following result − • [11, 12, 22, 25, 34, 64, 90]
  • 70. Python program for Merging of list elements • Sometimes, we require to merge some of the elements as single element in the list. • This is usually with the cases with character to string conversion. • This type of task is usually required in the development domain to merge the names into one element. • Let’s discuss certain ways in which this can be performed.
  • 71. • Method #1 : Using join() + List Slicing • The join function can be coupled with list slicing which can perform the task of joining each character in a range picked by the list slicing functionality. # Python3 code to demonstrate # merging list elements # using join() + list slicing # initializing list test_list = ['I', 'L', 'O', 'V', 'E', 'G', 'F', 'G'] # printing original list print ("The original list is : " + str(test_list)) # using join() + list slicing # merging list elements test_list[5 : 8] = [''.join(test_list[5 : 8])] # printing result print ("The list after merging elements : " + str(test_list)) Output: The original list is : ['I', 'L', 'O', 'V', 'E', 'G', 'F', 'G'] The list after merging elements : ['I', 'L', 'O', 'V', 'E', 'GFG']
  • 72. • Method #2 : Using reduce() + lambda + list slicing • The task of joining each element in a range is performed by the reduce function and a lambda. The reduce function applies the lambda cumulatively to each element in the slice. • In Python 3, reduce must be imported from functools # Python3 code to demonstrate # merging list elements # using reduce() + lambda + list slicing from functools import reduce # initializing list test_list = ['I', 'L', 'O', 'V', 'E', 'G', 'F', 'G'] # printing original list print ("The original list is : " + str(test_list)) # using reduce() + lambda + list slicing # merging list elements test_list[5 : 8] = [reduce(lambda i, j: i + j, test_list[5 : 8])] # printing result print ("The list after merging elements : " + str(test_list)) • Output: • The original list is : ['I', 'L', 'O', 'V', 'E', 'G', 'F', 'G'] • The list after merging elements : ['I', 'L', 'O', 'V', 'E', 'GFG']
  • 73. Python Program for Merge Sort • Merge Sort is a Divide and Conquer algorithm. • It divides input array in two halves, calls itself for the two halves and then merges the two sorted halves. • The merge() function is used for merging two halves. • The merge(arr, l, m, r) is key process that assumes that arr[l..m] and arr[m+1..r] are sorted and merges the two sorted sub-arrays into one.
  • 74. # Python program for implementation of MergeSort # Merges two subarrays of arr[]. # First subarray is arr[l..m] # Second subarray is arr[m+1..r] def merge(arr, l, m, r): n1 = m - l + 1 n2 = r- m # create temp arrays L = [0] * (n1) R = [0] * (n2) # Copy data to temp arrays L[] and R[] for i in range(0 , n1): L[i] = arr[l + i] for j in range(0 , n2): R[j] = arr[m + 1 + j] # Merge the temp arrays back into arr[l..r] i = 0 # Initial index of first subarray j = 0 # Initial index of second subarray k = l # Initial index of merged subarray
  • 75. while i < n1 and j < n2 : if L[i] <= R[j]: arr[k] = L[i] i += 1 else: arr[k] = R[j] j += 1 k += 1 # Copy the remaining elements of L[], if there # are any while i < n1: arr[k] = L[i] i += 1 k += 1 # Copy the remaining elements of R[], if there # are any while j < n2: arr[k] = R[j] j += 1 k += 1
  • 76. # l is for left index and r is right index of the # sub-array of arr to be sorted def mergeSort(arr,l,r): if l < r: # Same as (l+r)//2, but avoids overflow for # large l and r m = (l+(r-1))//2 # Sort first and second halves mergeSort(arr, l, m) mergeSort(arr, m+1, r) merge(arr, l, m, r) # Driver code to test above arr = [12, 11, 13, 5, 6, 7] n = len(arr) print ("Given array is") for i in range(n): print(arr[i], end=" ") mergeSort(arr,0,n-1) print ("\n\nSorted array is") for i in range(n): print(arr[i], end=" ")
  • 77. Output • Given array is • 12 11 13 5 6 7 • Sorted array is • 5 6 7 11 12 13
  • 78. Insertion Sort • Insertion sort involves finding the right place for a given element in a sorted list. So in the beginning we compare the first two elements and sort them. • Then we pick the third element and find its proper position among the previous two sorted elements. • This way we gradually go on adding more elements to the already sorted list by putting them in their proper position. def insertion_sort(InputList): for i in range(1, len(InputList)): j = i-1 nxt_element = InputList[i] # Shift larger sorted elements right; check j >= 0 first # to avoid indexing with a negative j while (j >= 0) and (InputList[j] > nxt_element): InputList[j+1] = InputList[j] j = j-1 InputList[j+1] = nxt_element lst = [19,2,31,45,30,11,121,27] insertion_sort(lst) print(lst) • When the above code is executed, it produces the following result − • [2, 11, 19, 27, 30, 31, 45, 121]