Cit310 Summary From Noungeeks
Algorithms are named for the 9th century Persian mathematician Al-
Khowarizmi. He wrote a treatise in Arabic in 825 AD, On Calculation
with Hindu Numerals. It was translated into Latin in the 12th century as
Algoritmi de numero Indorum, which title was likely intended to mean
"[Book by] Algoritmus on the numbers of the Indians", where
"Algoritmi" was the translator's rendition of the author's name in the
genitive case; but people misunderstanding the title treated Algoritmi as
a Latin plural and this led to the word "algorithm" (Latin algorismus)
coming to mean "calculation method".
Characteristics of Algorithms
Advantages of Algorithms
Disadvantages of Algorithms
Advantages of Pseudocode
Disadvantages of Pseudocode
Analysis of Algorithm
In other words, you should describe what you want your code to do in an English-like language so that it is more readable and understandable before implementing it; this is essentially the concept of an algorithm.
So, Design and Analysis of Algorithms is about how to design various algorithms and how to analyze them. After designing and analyzing, choose the best algorithm, the one that takes the least time and the least memory, and then implement it as a program in C.
We will focus more on time than on space, because time is the more limiting parameter in terms of hardware. It is not easy to take a computer and change its speed; so, if we are running an algorithm on a particular platform, we are more or less stuck with the performance that platform can give us in terms of speed.
1. Worst-case time complexity: For 'n' input size, the worst-case time
complexity can be defined as the maximum amount of time needed
by an algorithm to complete its execution. Thus, it is nothing but a
function defined by the maximum number of steps performed on an
instance having an input size of n.
2. Average case time complexity: For 'n' input size, the average-case
time complexity can be defined as the average amount of time needed
by an algorithm to complete its execution. Thus, it is nothing but a
function defined by the average number of steps performed on an
instance having an input size of n.
3. Best case time complexity: For 'n' input size, the best-case time
complexity can be defined as the minimum amount of time needed by
an algorithm to complete its execution. Thus, it is nothing but a
function defined by the minimum number of steps performed on an
instance having an input size of n.
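The three cases above can be made concrete with a small sketch. The function below is my own illustration (not from the text): a linear search that also counts the comparisons it performs, so the best case (target found at the first position) and the worst case (target at the end or absent) are visible directly.

```python
def linear_search_steps(arr, target):
    """Return (index, steps): position of target and number of comparisons made."""
    steps = 0
    for i, value in enumerate(arr):
        steps += 1                 # one comparison per element examined
        if value == target:
            return i, steps
    return -1, steps               # target absent: every element was examined

data = [7, 3, 9, 1, 5]
print(linear_search_steps(data, 7))   # best case: found first, 1 step
print(linear_search_steps(data, 5))   # worst case: found last, 5 steps
print(linear_search_steps(data, 2))   # worst case: absent, 5 steps
```

For an input of size n, the step count ranges from 1 (best case) through roughly n/2 on average to n (worst case), matching the three definitions above.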
Complexity of Algorithms
The term algorithm complexity measures how many steps are required
by the algorithm to solve the given problem. It evaluates the order of
count of operations executed by an algorithm as a function of input data
size.
Constant Complexity:
Logarithmic Complexity:
Linear Complexity:
Quadratic Complexity:
For example, for N elements, the number of steps is found to be of the order of 3N²/2.
Cubic Complexity:
Exponential Complexity:
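To see how differently these classes grow, here is a small sketch (my own, not from the text) that tabulates the approximate operation count for each class at a couple of input sizes:

```python
import math

def counts(n):
    """Approximate operation counts for the common complexity classes."""
    return {
        "constant":    1,
        "logarithmic": math.ceil(math.log2(n)),
        "linear":      n,
        "quadratic":   n * n,
        "cubic":       n ** 3,
        "exponential": 2 ** n,
    }

for n in (4, 16):
    print(n, counts(n))
```

Even at n = 16, the exponential count (65536) dwarfs the quadratic one (256), which is why exponential-time algorithms are impractical beyond the smallest inputs.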
First, they provide guidance for designing algorithms for new problems,
i.e., problems for which there is no known satisfactory algorithm.
Second, algorithms are the cornerstone of computer science. Every
science is interested in classifying its principal subject, and computer
science is no exception. Algorithm design techniques make it possible to
classify algorithms according to an underlying design idea; therefore,
they can serve as a natural way to both categorize and study algorithms.
Following are some standard algorithms of the Divide and Conquer variety.
Closest Pair of Points: the problem is to find the closest pair of points in a set of points in the x-y plane.
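As a minimal sketch of the divide-and-conquer idea, here is binary search, one of the standard algorithms in this family: each call divides the sorted array at its midpoint and conquers only the half that can still contain the target.

```python
def binary_search(arr, target, lo=0, hi=None):
    """Divide-and-conquer search in a sorted list; returns index or -1."""
    if hi is None:
        hi = len(arr) - 1
    if lo > hi:
        return -1                  # empty subproblem: target not present
    mid = (lo + hi) // 2           # divide at the midpoint
    if arr[mid] == target:
        return mid
    elif arr[mid] < target:
        return binary_search(arr, target, mid + 1, hi)   # conquer right half
    else:
        return binary_search(arr, target, lo, mid - 1)   # conquer left half

print(binary_search([1, 3, 5, 7, 9], 7))   # -> 3
```

Because each step halves the problem, the running time is logarithmic in the input size.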
2. Greedy Technique:
A greedy algorithm always makes the choice (the greedy criterion) that looks best at the moment, in order to optimize a given objective.
The greedy algorithm doesn't always guarantee the optimal solution; however, it generally produces a solution that is very close in value to the optimal.
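A small sketch (my own example, not from the text) of the greedy criterion using coin change: always take the largest coin that still fits. For some coin systems this is optimal; for others it is not, which illustrates the caveat above.

```python
def greedy_change(amount, coins):
    """Make change greedily: repeatedly take the largest coin that fits."""
    result = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            amount -= c
            result.append(c)
    return result

print(greedy_change(30, [25, 10, 5, 1]))  # [25, 5] - optimal for this system
print(greedy_change(30, [25, 10, 1]))     # [25, 1, 1, 1, 1, 1] - 6 coins,
                                          # but [10, 10, 10] uses only 3
```

With denominations 25, 10, 5, 1 the greedy choice happens to be optimal, but with 25, 10, 1 it is not, so greedy solutions must be checked problem by problem.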
There are two main instances of recursion. The first is when recursion is
used as a technique in which a function makes one or more calls to itself.
The second is when a data structure uses smaller instances of the exact
same type of data structure when it represents itself.
Why use recursion?
Factorial Example
n! = n · (n−1) · (n−2) ⋯ 3 · 2 · 1
For example, 4! = 4 · 3 · 2 · 1 = 24.
So how can we state this in a recursive manner? This is where the
concept of base case comes in.
4! = 4 · 3! = 24
n! = n · (n−1)!
Note, if n = 0, then n! = 1. This means the base case occurs once n=0,
the recursive cases are defined in the equation above. Whenever you are
trying to develop a recursive solution it is very important to think about
the base case, as your solution will need to return the base case once all
the recursive cases have been worked through. Let’s look at how we can
create the factorial function in Python:
def fact(n):
    '''
    Returns factorial of n (n!).
    '''
    # Base case
    if n == 0:
        return 1
    # Recursive case
    else:
        return n * fact(n - 1)
1. A recursive algorithm must have a base case.
2. A recursive algorithm must change its state and move toward the base case.
3. A recursive algorithm must call itself, recursively.
Recurrence Relations
For example, the worst-case running time T(n) of the MERGE-SORT procedure is described by the recurrence:
T(n) = θ(1) if n = 1
T(n) = 2T(n/2) + θ(n) if n > 1
1. Substitution Method.
2. Iteration Method.
3. Recursion Tree Method.
4. Master Method.
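The merge-sort recurrence can be checked numerically. The sketch below (my own, not from the text) evaluates T(n) = 2T(n/2) + n with T(1) = 1 for powers of two and compares it with the closed form n·log₂(n) + n, which is what the Master Method predicts up to constant factors.

```python
import math

def T(n):
    """Evaluate the merge-sort recurrence T(n) = 2T(n/2) + n, T(1) = 1,
    for n a power of two."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

for n in (2, 8, 64):
    closed_form = n * int(math.log2(n)) + n
    print(n, T(n), closed_form)   # the two columns agree exactly
```

This is the iteration method done by machine: unrolling the recurrence level by level yields log₂(n) levels of cost n each, plus n for the leaves, hence θ(n log n).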
Time Complexities:
2. Only one extra space is required for holding the temporary variable.
Radix Sort
Radix sort is a sorting technique that sorts the elements digit by digit based on their radix. It works on integer numbers; to sort elements of string type, we can use their hash values. This sorting algorithm makes no comparisons.
1. It is fast when the keys are short, i.e. when the range of the array elements is small.
It is used for constructing a suffix array. (An array that contains all the possible suffixes of a string in sorted order is called a suffix array.)
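A minimal least-significant-digit radix sort sketch for non-negative integers, radix 10 (my own illustration, not from the text). Note that it never compares two elements; it only distributes them into digit buckets and collects them back in order:

```python
def radix_sort(nums):
    """LSD radix sort for non-negative integers, radix 10."""
    if not nums:
        return nums
    exp = 1
    while max(nums) // exp > 0:                 # one pass per digit position
        buckets = [[] for _ in range(10)]       # one bucket per digit 0-9
        for x in nums:
            buckets[(x // exp) % 10].append(x)  # distribute by current digit
        nums = [x for b in buckets for x in b]  # collect, preserving bucket order
        exp *= 10
    return nums

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# -> [2, 24, 45, 66, 75, 90, 170, 802]
```

Each pass is stable (elements keep their relative order within a bucket), which is what makes sorting digit by digit correct.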
Stability in Sorting
Stable sort algorithms sort equal elements in the same order that they appear in the input. For example, consider sorting a hand of cards by their rank while ignoring their suit.
This allows the possibility of multiple different correctly sorted versions
of the original list. Stable sorting algorithms choose one of these,
according to the following rule: if two items compare as equal (like the
two 5 cards), then their relative order will be preserved, i.e. if one comes
before the other in the input, it will come before the other in the output.
A sorting algorithm that produces the first output is known as a stable sorting algorithm because the original order of equal keys is maintained: (4, 5) comes before (4, 3) in the sorted order, which was the original order, i.e. in the given input, (4, 5) comes before (4, 3).
On the other hand, an algorithm that produces the second output is known as an unstable sorting algorithm because the order of objects with the same key is not maintained: in the second output, (4, 3) comes before (4, 5), which was not the case in the original input.
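Stability can be demonstrated directly in Python, whose built-in sorted() is guaranteed stable. Using the same pairs as above and sorting only by the first component:

```python
pairs = [(1, 2), (4, 5), (9, 0), (4, 3)]

# Sort by the first component only; the second component is ignored by the key.
by_key = sorted(pairs, key=lambda p: p[0])

print(by_key)
# (4, 5) still precedes (4, 3), because sorted() preserves input order
# among elements whose keys compare equal.
```

An unstable algorithm would be free to emit (4, 3) before (4, 5), since both have the same key 4.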
Examples: the following computer algorithms are based on the Divide & Conquer approach:
2. Binary Search
4. Tower of Hanoi.
Dynamic Programming
The F(20) term will be calculated using the nth formula of the Fibonacci
series.
F(20) is calculated as the sum of F(19) and F(18).
The dynamic programming approach follows a sequence of basic steps and is applicable to problems that have overlapping subproblems and optimal substructure. Here, optimal substructure means that the solution of an optimization problem can be obtained by simply combining the optimal solutions of all the subproblems.
Top-down approach
Bottom-up approach
Top-down approach
1. It uses the recursion technique, which occupies more memory in the call stack. Sometimes, when the recursion is too deep, a stack overflow condition will occur.
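The two approaches can be sketched side by side using Fibonacci, consistent with the F(20) discussion above (the function names are my own): top-down uses memoized recursion, bottom-up fills values in increasing order with no recursion at all.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_top_down(n):
    """Top-down: plain recursion plus a memo cache, so each F(k) is
    computed once instead of exponentially many times."""
    if n < 2:
        return n
    return fib_top_down(n - 1) + fib_top_down(n - 2)

def fib_bottom_up(n):
    """Bottom-up: iterate from the base cases upward, keeping only the
    last two values; no call stack, constant extra space."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_top_down(20), fib_bottom_up(20))   # both print 6765
```

Both run in linear time; the bottom-up version avoids the call-stack cost (and the stack-overflow risk) mentioned above, at the price of computing every subproblem whether or not it is needed.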
Notations Used
In summary,
Deterministic algorithms are by far the most studied and familiar kind of
algorithm as well as one of the most practical, since they can be run on
real machines efficiently.
1. If it uses external state other than the input, such as user input, a
global variable, a hardware timer value, a random value, or stored
disk data.
Definition of P Problems
For example, problems solvable by the Greedy method or Dynamic Programming are in P; by contrast, deciding whether a given graph G = (V, E) contains a Hamiltonian cycle is not known to be solvable in polynomial time.
NP-hard Problems
If we can solve this problem in polynomial time, then we can solve all NP problems in polynomial time.
A problem is reducible to another if an instance of one can be converted into an instance of the other in polynomial time.
NP-complete Problems:
1. Tractable Problem:
– Sorting a list
Intractable Problem:
This algorithm, however, does not provide an efficient solution and is,
therefore, not feasible for computation with anything more than the
smallest input.