Rift Valley University: Department of Computer Science Algorithm Analysis Assignment

This document summarizes three search algorithms: binary search, interpolation search, and jump search. Binary search works on a sorted list by repeatedly dividing the search interval in half and focusing on either the upper or lower half based on whether the search key is less than or greater than the middle element. Interpolation search is an improvement on binary search for uniformly distributed data. It may search in different locations depending on how close the search key is to the beginning or end of the list. Jump search checks elements by "jumping ahead" fixed steps or skipping elements, rather than searching all elements linearly. It searches indexes at fixed intervals until it finds the interval containing the key, then searches linearly within that interval.


Rift Valley University

Department of Computer Science


Algorithm Analysis Assignment

Mihiret Jorgi
ID: 0029/16
Regular

Submitted date: 05/05/20


Submitted to: Mr. Abebe

1. Define the terms Algorithm, Algorithm Analysis, Space Complexity, and Time Complexity, with examples

 Algorithm: An algorithm is a finite list of instructions, most often used to solve a problem or perform a task. You may have heard the term used in some fancy context about a genius using an algorithm to do something highly complex, usually in programming.

Example: One of the simplest algorithms finds the largest number in a list of numbers in random order. Finding the solution requires looking at every number in the list. From this follows a simple algorithm, which can be stated in a high-level description in English prose, as:
High-level description:
If there are no numbers in the set then there is no highest number.
Assume the first number in the set is the largest number in the set.
For each remaining number in the set: if this number is larger than the current largest number, consider this
number to be the largest number in the set.
When there are no numbers left in the set to iterate over, consider the current largest number to be the largest
number of the set.
(Quasi-)formal description: Written in prose but much closer to the high-level language of a computer
program, the following is the more formal coding of the algorithm in pseudocode or pidgin code:

Algorithm LargestNumber
Input: A list of numbers L.
Output: The largest number in the list L.
if L.size = 0 return null
largest ← L[0]
for each item in L, do
    if item > largest, then
        largest ← item
return largest
"←" denotes assignment. For instance, "largest ← item" means that the value of largest changes to the value
of item.
"return" terminates the algorithm and outputs the following value.
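The pseudocode above can also be rendered as runnable C. This is only a sketch: the function name is ours, and since C integers have no "null" value, this version simply requires a non-empty list instead of returning null.

```c
#include <assert.h>

/* C sketch of the LargestNumber pseudocode above.  C ints have no
   "null", so this version requires a non-empty list (n > 0). */
int largest_number(const int L[], int n)
{
    assert(n > 0);               /* the pseudocode returns null for an empty list */
    int largest = L[0];          /* assume the first element is the largest */
    for (int i = 1; i < n; i++)  /* examine each remaining element */
        if (L[i] > largest)
            largest = L[i];      /* found a new largest number */
    return largest;
}
```

For example, applying largest_number to the list {7, 2, 19, 5, 11} returns 19.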

 Algorithm analysis: Algorithm analysis is a field of computer science dedicated to understanding the complexity of algorithms. Algorithms are generally defined as processes that perform a series of operations to reach some end. Algorithms can be expressed in many ways: in flow charts, in natural language, and in computer programming languages. Algorithms are used in mathematics, computing, and linguistics, but their most common use is in computers, to do calculations or process data. Algorithm analysis deals with algorithms written in computer programming languages, which are based on mathematical formalism.

An algorithm is essentially a set of instructions for a computer to perform a calculation in a certain way. 

Example: A computer might use an algorithm to calculate an employee's paycheck. In order for the computer to perform the calculations, it needs appropriate data put into the system, such as the employee's wage rate and number of hours worked.
More than one algorithm might work to perform the same operation, but some algorithms use more
memory and take longer to perform than others. Also, how do we know how well algorithms work in
general, given differences between computers and data inputs? This is where algorithm analysis comes
in.
 One way to test an algorithm is to run a computer program and see how well it works. The
problem with this approach is that it only tells us how well the algorithm works with a
particular computer and set of inputs. The purpose of algorithm analysis is to test and then draw
conclusions about how well a particular algorithm works in general. This would be very difficult
and time consuming to do on individual computers, so researchers devise models of computer
functioning to test algorithms.
 In general, algorithm analysis is most concerned with finding out how much time a program takes to run, and how much memory storage space it needs to execute. In particular, computer scientists use algorithm analysis to determine how the data input into a program affects its total running time, how much memory space the computer needs for program data, how much space the program's code takes in the computer, whether an algorithm produces correct calculations, how complex a program is, and how well it deals with unexpected results.
 Time complexity: The time complexity of an algorithm is the amount of time taken by the algorithm to complete its process as a function of its input length, n. The time complexity of an algorithm is commonly expressed using asymptotic notations:

Big O: O(n)

Big Theta: Θ(n)

Big Omega: Ω(n)

Time complexity is represented as a function that describes the amount of time necessary for an algorithm to run to completion. In computer science, this typically means how much time the processes and data structures in our codebase / functions take to achieve their goal.

Example:-
Let us consider a model machine which has the following specifications:
–Single processor
–32 bit
–Sequential execution
–1 unit time for arithmetic and logical operations
–1 unit time for assignment and return statements
1. Sum of two numbers:
Pseudocode:
Sum(a, b) {
    return a + b  // 2 units of time (constant): one for the arithmetic operation and one for the return, as per the conventions above; cost = 2, no. of times = 1
}
Tsum = 2 = C = O(1)
2. Sum of all elements of a list:
Pseudocode:
list_Sum(A, n) {          // A -> array, n -> number of elements in the array
    total = 0             // cost = 1, no. of times = 1
    for i = 0 to n-1      // cost = 2, no. of times = n+1 (+1 for the final false check)
        total = total + A[i]  // cost = 2, no. of times = n
    return total          // cost = 1, no. of times = 1
}
Tsum = 1 + 2*(n+1) + 2*n + 1 = 4n + 4 = C1*n + C2 = O(n)
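The list_Sum pseudocode can be written as runnable C; the comments mirror the unit-cost accounting above (the lowercase function name is our choice).

```c
/* Runnable C version of the list_Sum pseudocode above.  The comments
   mirror the unit-cost accounting used in the analysis. */
int list_sum(const int A[], int n)
{
    int total = 0;               /* cost = 1, executed once */
    for (int i = 0; i < n; i++)  /* loop check runs n+1 times */
        total = total + A[i];    /* cost = 2, executed n times */
    return total;                /* cost = 1, executed once */
}
```

For the array {1, 2, 3, 4, 5} this returns 15, and the running time grows linearly with n, i.e. O(n).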

 Space complexity: The space complexity of an algorithm is the amount of space (or memory) taken by the algorithm to run as a function of its input length, n. Space complexity includes both auxiliary space and the space used by the input.
 Auxiliary space is the temporary or extra space used by the algorithm while it is being executed. The space complexity of an algorithm is commonly expressed using Big O (O(n)) notation.
 Many algorithms have inputs that can vary in size, e.g., an array. In such cases, the space complexity will depend on the size of the input and hence cannot be less than O(n) for an input of size n. For fixed-size inputs, the complexity will be a constant O(1).
 Space complexity is the total amount of memory space used by an algorithm/program including the
space of input values for execution. So to find space complexity, it is enough to calculate the space
occupied by the variables used in an algorithm/program.
But often, people confuse Space complexity with Auxiliary space. Auxiliary space is just a temporary or
extra space and it is not the same as space complexity. In simpler terms,

Space Complexity = Auxiliary Space + Space used by input values

 The best algorithm/program should have the least space complexity: the less space it uses, the better.

Space complexity is represented as a function that describes the amount of space necessary for an algorithm to run to completion. In computer science, this typically means how much memory the processes and data structures in our codebase / functions take up to achieve their goal.

Example:

#include <stdio.h>

int main()
{
    int a = 5, b = 5, c;  /* a fixed number of int variables */
    c = a + b;
    printf("%d", c);
    return 0;
}
Output:
10
The memory used here does not depend on any input size, so the space complexity of this program is constant, O(1).

2. List and describe the following algorithms


A. Three search algorithms
1. Binary search: Binary search is a more efficient search algorithm which relies on the elements in the list being sorted. We apply the same search process to progressively smaller sub-lists of the original list, starting with the whole list and approximately halving the search area every time.

Binary Search: Search a sorted array by repeatedly dividing the search interval in half. Begin with an interval
covering the whole array. If the value of the search key is less than the item in the middle of the interval, narrow
the interval to the lower half. Otherwise narrow it to the upper half. Repeatedly check until the value is found or
the interval is empty.
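The interval-halving procedure just described can be sketched in C (an illustrative iterative version; the function name is ours).

```c
/* Iterative binary search on a sorted array, as described above.
   Returns the index of key, or -1 if the interval becomes empty. */
int binary_search(const int a[], int n, int key)
{
    int lo = 0, hi = n - 1;           /* interval covering the whole array */
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2; /* middle of the interval (overflow-safe) */
        if (a[mid] == key)
            return mid;               /* value found */
        else if (a[mid] < key)
            lo = mid + 1;             /* narrow to the upper half */
        else
            hi = mid - 1;             /* narrow to the lower half */
    }
    return -1;                        /* interval empty: key not present */
}
```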

2. Interpolation search: Interpolation search is an improvement over binary search for instances where the values in a sorted array are uniformly distributed. Binary search always goes to the middle element to check. Interpolation search, on the other hand, may go to different locations according to the value of the key being searched. For example, if the value of the key is closer to the last element, interpolation search is likely to start searching toward the end of the array.
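One common way to pick the probe position, sketched in C. The proportional position formula assumes roughly uniformly distributed values, and the function name is ours.

```c
/* Interpolation search sketch for a sorted array of roughly uniformly
   distributed values.  Instead of always probing the middle, the probe
   position is estimated from the key's value.  Returns index or -1. */
int interpolation_search(const int a[], int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi && key >= a[lo] && key <= a[hi]) {
        if (a[hi] == a[lo])                     /* all remaining values equal */
            return (a[lo] == key) ? lo : -1;
        /* probe proportionally between a[lo] and a[hi] */
        int pos = lo + (int)((long long)(key - a[lo]) * (hi - lo)
                             / (a[hi] - a[lo]));
        if (a[pos] == key)
            return pos;
        else if (a[pos] < key)
            lo = pos + 1;                       /* key lies to the right */
        else
            hi = pos - 1;                       /* key lies to the left */
    }
    return -1;
}
```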

3. Jump search: Like binary search, jump search is a searching algorithm for sorted arrays. The basic idea is to check fewer elements (than linear search) by jumping ahead by fixed steps, skipping some elements instead of searching all of them.
For example, suppose we have an array arr[] of size n and block (to be jumped) size m. Then we search at the
indexes arr[0], arr[m], arr[2m]…..arr[km] and so on. Once we find the interval (arr[km] < x < arr[(k+1)m]),
we perform a linear search operation from the index km to find the element x.
Let’s consider the following array: (0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610). Length of the
array is 16. Jump search will find the value of 55 with the following steps assuming that the block size to be
jumped is 4.
STEP 1: Jump from index 0 to index 4;
STEP 2: Jump from index 4 to index 8;
STEP 3: Jump from index 8 to index 12;
STEP 4: Since the element at index 12 is greater than 55 we will jump back a step to come to index 8.
STEP 5: Perform linear search from index 8 to get the element 55.
B. Four sorting algorithms
1. Merge sort: Merge sort is a comparison-based algorithm that focuses on how to merge together two pre-sorted arrays such that the resulting array is also sorted.
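A minimal top-down C sketch of that merge step and the surrounding recursion. The function names and the heap-allocated scratch buffer are our choices.

```c
#include <stdlib.h>
#include <string.h>

/* Merge the two pre-sorted halves a[lo..mid) and a[mid..hi). */
static void merge(int a[], int tmp[], int lo, int mid, int hi)
{
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi)                      /* take the smaller head */
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];             /* drain the left half  */
    while (j < hi)  tmp[k++] = a[j++];             /* drain the right half */
    memcpy(a + lo, tmp + lo, (size_t)(hi - lo) * sizeof(int));
}

static void merge_sort_rec(int a[], int tmp[], int lo, int hi)
{
    if (hi - lo < 2) return;                       /* 0 or 1 element: sorted */
    int mid = lo + (hi - lo) / 2;
    merge_sort_rec(a, tmp, lo, mid);               /* sort the left half  */
    merge_sort_rec(a, tmp, mid, hi);               /* sort the right half */
    merge(a, tmp, lo, mid, hi);                    /* merge the two halves */
}

void merge_sort(int a[], int n)
{
    int *tmp = malloc((size_t)n * sizeof(int));    /* O(n) scratch space */
    if (tmp == NULL) return;
    merge_sort_rec(a, tmp, 0, n);
    free(tmp);
}
```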

2. Insertion sort: Insertion sort is a comparison-based algorithm that builds the final sorted array one element at a time. It iterates through the input array, takes one element per iteration, finds the place the element belongs in the sorted portion, and places it there.
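That one-element-at-a-time insertion can be sketched in C (an illustrative version; the function name is ours).

```c
/* Insertion sort sketch: grow a sorted prefix one element at a time,
   shifting larger elements right to open the insertion point. */
void insertion_sort(int a[], int n)
{
    for (int i = 1; i < n; i++) {
        int key = a[i];              /* element to insert into the prefix */
        int j = i - 1;
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j];         /* shift the larger element right */
            j--;
        }
        a[j + 1] = key;              /* drop key into its place */
    }
}
```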

3. Bubble sort: Bubble sort is a comparison-based algorithm that compares each pair of adjacent elements in an array and swaps them if they are out of order, repeating passes over the array until it is sorted.
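The pass-and-swap procedure can be sketched in C; the early exit when a pass makes no swaps is a common refinement we have added here.

```c
#include <stdbool.h>

/* Bubble sort sketch: swap out-of-order adjacent pairs on each pass;
   stop early once a whole pass completes with no swaps. */
void bubble_sort(int a[], int n)
{
    for (int pass = 0; pass < n - 1; pass++) {
        bool swapped = false;
        for (int i = 0; i < n - 1 - pass; i++) {
            if (a[i] > a[i + 1]) {             /* adjacent pair out of order */
                int t = a[i];
                a[i] = a[i + 1];
                a[i + 1] = t;
                swapped = true;
            }
        }
        if (!swapped) break;                   /* array already sorted */
    }
}
```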

4. Selection sort: To order a given list using selection sort, we repeatedly select the smallest remaining element and move it to the end of a growing sorted list.
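The select-and-move step above can be sketched in C (an illustrative version; the function name is ours).

```c
/* Selection sort sketch: repeatedly select the smallest remaining
   element and swap it onto the end of the growing sorted prefix. */
void selection_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int min = i;
        for (int j = i + 1; j < n; j++)
            if (a[j] < a[min])
                min = j;             /* index of the smallest remaining */
        int t = a[i];                /* swap it into position i */
        a[i] = a[min];
        a[min] = t;
    }
}
```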

3. Define the input case analysis for the algorithms in question 2 (best case, average case, and worst case analysis)

 Input cases in searching algorithms

For a linear search, the worst-case complexity is O(n), sometimes known as an O(n) search: the time taken to search keeps increasing as the number of elements increases. A binary search, however, cuts the search space in half as soon as it finds the middle of a sorted list. The middle element is examined to check whether it is greater than or less than the value to be searched; accordingly, the search continues in one half of the given list. To find the position to be examined, it uses the following formula:

mid = (low + high) / 2

Best, Worst and Average Cases

 Worst case: an upper bound on the number of basic operations that will be performed
 Best case: a lower bound on the number of basic operations that will be performed
 Average case: the average number of basic operations that will be performed over all the problems of a given size

Binary search runs in logarithmic time in the worst case, making O(log n) comparisons, where n is the number of elements in the array, O is Big O notation, and log is the logarithm. Binary search is faster than linear search except for small arrays. However, the array must be sorted first to be able to apply binary search. There are specialized data structures designed for fast searching, such as hash tables, that can be searched more efficiently than binary search. However, binary search can be used to solve a wider range of problems, such as finding the next-smallest or next-largest element in the array relative to the target even if it is absent from the array.
There are numerous variations of binary search. In particular, fractional cascading speeds up binary searches
for the same value in multiple arrays. Fractional cascading efficiently solves a number of search problems in
computational geometry and in numerous other fields. Exponential search extends binary search to
unbounded lists. The binary search tree and B-tree data structures are based on binary search.
We can have three cases to analyze an algorithm:
1) Worst Case
2) Average Case
3) Best Case
 Worst Case Analysis (Usually Done)
In worst case analysis, we calculate an upper bound on the running time of an algorithm. We must know the case that causes the maximum number of operations to be executed. For linear search, the worst case happens when the element to be searched for (call it x) is not present in the array. When x is not present, the search function compares it with all the elements of arr[] one by one. Therefore, the worst case time complexity of linear search would be Θ(n).
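The linear search being analyzed can be sketched in C (the function and array names follow the text; the signature is our illustrative choice).

```c
/* Linear search, the algorithm analyzed above: best case Θ(1) when x
   is at index 0, worst case Θ(n) when x is not present at all.
   Returns the index of x in arr[], or -1 if x is absent. */
int search(const int arr[], int n, int x)
{
    for (int i = 0; i < n; i++)  /* compares x with each element in turn */
        if (arr[i] == x)
            return i;
    return -1;                   /* worst case: all n comparisons made */
}
```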
 Average Case Analysis (Sometimes done)
In average case analysis, we take all possible inputs and calculate computing time for all of the inputs. Sum
all the calculated values and divide the sum by total number of inputs. We must know (or predict)
distribution of cases. For the linear search problem, let us assume that all cases are uniformly distributed
(including the case of x not being present in array). So we sum all the cases and divide the sum by (n+1).
Following is the value of average case time complexity.
Average Case Time = (θ(1) + θ(2) + ... + θ(n) + θ(n+1)) / (n+1)
= θ((n+1)(n+2)/2) / (n+1)
= Θ(n)
 Best Case Analysis (Bogus)
In best case analysis, we calculate a lower bound on the running time of an algorithm. We must know the case that causes the minimum number of operations to be executed. In the linear search problem, the best case occurs when x is present at the first location. The number of operations in the best case is constant (not dependent on n), so the time complexity in the best case would be Θ(1).
Most of the time, we do worst case analysis to analyze algorithms. In worst case analysis, we guarantee an upper bound on the running time of an algorithm, which is useful information.
Average case analysis is not easy to do in most practical cases and is rarely done, since we must know (or predict) the mathematical distribution of all possible inputs.
Best case analysis is bogus: guaranteeing a lower bound on an algorithm does not provide any information, since in the worst case an algorithm may still take years to run.
 Input cases in sorting algorithms
For some algorithms, all the cases are asymptotically the same, i.e., there are no worst and best cases. For example, merge sort performs Θ(n log n) operations in all cases. Most other sorting algorithms have worst and best cases. For example, in the typical implementation of quick sort (where the pivot is chosen as a corner element), the worst case occurs when the input array is already sorted, and the best case occurs when the pivot elements always divide the array into two halves. For insertion sort, the worst case occurs when the array is reverse sorted and the best case occurs when the array is already sorted in the same order as the output.
In computer science, best, worst, and average cases of a given algorithm express what the resource usage is
at least, at most and on average, respectively. Usually the resource being considered is running time, i.e. time
complexity, but could also be memory or other resource. Best case is the function which performs the
minimum number of steps on input data of n elements. Worst case is the function which performs the
maximum number of steps on input data of size n. Average case is the function which performs an average
number of steps on input data of n elements.
In real-time computing, the worst-case execution time is often of particular concern since it is important to
know how much time might be needed in the worst case to guarantee that the algorithm will always finish on
time.
Average performance and worst-case performance are the most used in algorithm analysis. Less widely
found is best-case performance, but it does have uses: for example, where the best cases of individual tasks
are known, they can be used to improve the accuracy of an overall worst-case analysis. Computer scientists
use probabilistic analysis techniques, especially expected value, to determine expected running times.
The terms are used in other contexts; for example the worst- and best-case outcome of a planned-for
epidemic, worst-case temperature to which an electronic circuit element is exposed, etc.
REFERENCE

 http://www.geeksforgeeks.org
 http://whatis.techtarget.com
 http://en.m.wikipedia.org
 http://www.hackerearth.com
 http://www.studytonight.com
 http://realpython.com
