1. Algorithm
Unit-1 (Algorithms)
What is an Algorithm?
● An algorithm is a step-by-step procedure that defines a set of instructions to
be executed in a certain order to get the desired output.
In order to write an algorithm, the following points must be clear:
1. The problem that is to be solved by the algorithm, i.e. a clear problem definition.
2. The constraints of the problem that must be respected while solving it.
3. The input to be taken to solve the problem.
4. The output expected when the problem is solved.
5. The solution to the problem, within the given constraints.
Characteristics of an Algorithm
For a set of instructions to qualify as an algorithm, it must have the following
characteristics:
● Clear and Unambiguous: The algorithm should be clear and unambiguous. Each
of its steps should be clear in all aspects and must lead to only one meaning.
● Well-Defined Inputs: If an algorithm says to take inputs, it should be well-defined
inputs.
● Well-Defined Outputs: The algorithm must clearly define what output will be
yielded and it should be well-defined as well.
● Finiteness: The algorithm must be finite, i.e. it should terminate after a finite number of steps.
● Feasible: The algorithm must be simple, generic, and practical, such that it can be
executed with the available resources. It must not rely on any future technology or
anything else impractical.
● Language Independent: The Algorithm designed must be language-independent,
i.e. it must be just plain instructions that can be implemented in any language, and
yet the output will be the same, as expected.
Designing Algorithms:
There are several algorithm-design techniques. Some important ones are:
1. Searching Algorithm: Searching algorithms are used to find an element, or a
group of elements, in a particular data structure. They come in different types
depending on their approach and on the data structure in which the element is to
be found.
2. Divide and Conquer Algorithm: In this type of algorithm, the problem is broken
into smaller sub-problems, the sub-problems are solved, and their solutions are
then merged to obtain the final answer. It works in three steps:
● Divide: break the problem into sub-problems of the same type.
● Solve: solve the sub-problems recursively.
● Combine: merge the sub-problem solutions into the overall solution.
3. Greedy Algorithm: In this type of algorithm the solution is built part by part.
Each next part is chosen based on its immediate benefit: the option giving the
most benefit is selected as the solution for that part.
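As a small illustration (not part of the notes), binary search combines the searching and divide-and-conquer ideas above: it divides the sorted list at the middle, solves the problem in one half, and needs no extra combine work.

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # Divide: pick the middle element
        if items[mid] == target:      # Solve: target found at mid
            return mid
        elif items[mid] < target:     # Discard the half that cannot contain target
            lo = mid + 1
        else:
            hi = mid - 1
    return -1                         # Target is not in the list
```

For example, `binary_search([1, 3, 5, 7, 9], 7)` returns 3, while a missing key such as 4 returns -1.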
Analysis of Algorithms:
The analysis (or complexity) of an algorithm measures the time it takes to run
and the space (memory) it requires. These two factors define the efficiency of an
algorithm.
Advantages of Algorithms:
● An algorithm is easy to understand.
● An algorithm is a step-wise representation of a solution to a given
problem.
● In Algorithm the problem is broken down into smaller pieces or steps
hence, it is easier for the programmer to convert it into an actual
program.
Disadvantages of Algorithms:
● Writing an algorithm for a large problem is time-consuming.
● Understanding complex logic through algorithms can be very difficult.
● Branching and Looping statements are difficult to show in Algorithms.
Asymptotic Notations
The main idea of asymptotic analysis is to measure the efficiency of algorithms
in a way that does not depend on machine-specific constants and does not require
implementing the algorithms and comparing the running times of programs.
Asymptotic notations are the mathematical tools used to represent the time
complexity of algorithms for asymptotic analysis.
There are mainly three asymptotic notations:
1. Theta Notation (Θ): Theta notation encloses the function from above and below.
Since it represents both the upper and the lower bound of the running time of an
algorithm, it is used for analyzing the average-case complexity of an algorithm.
Let g and f be functions from the set of natural numbers to itself. The function f
is said to be Θ(g) if there are constants c1, c2 > 0 and a natural number n0 such
that c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0.
Θ (g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1 *
g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0}
The above expression can be read as: if f(n) is Θ(g(n)), then the value of
f(n) always lies between c1 * g(n) and c2 * g(n) for large values of n (n ≥ n0).
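As a concrete example (not from the notes): f(n) = 3n + 2 is Θ(n), since 3n ≤ 3n + 2 ≤ 4n for all n ≥ 2, i.e. the definition holds with c1 = 3, c2 = 4, n0 = 2. A quick numeric check:

```python
# Check that f(n) = 3n + 2 is Theta(n) using c1 = 3, c2 = 4, n0 = 2.
c1, c2, n0 = 3, 4, 2
f = lambda n: 3 * n + 2
g = lambda n: n

# The two-sided bound c1*g(n) <= f(n) <= c2*g(n) holds for every n >= n0.
assert all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 10_000))
```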
2. Big-O Notation (O): Big-O notation represents the upper bound of the running
time of an algorithm. Therefore, it gives the worst-case complexity of an algorithm.
If f(n) describes the running time of an algorithm, f(n) is O(g(n)) if there exist
positive constants c and n0 such that 0 ≤ f(n) ≤ c * g(n) for all n ≥ n0.
O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n)
for all n ≥ n0 }
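Continuing the same illustrative example, f(n) = 3n + 2 is O(n): the one-sided bound 3n + 2 ≤ 5n holds for all n ≥ 1, so c = 5 and n0 = 1 witness the definition.

```python
# Check that f(n) = 3n + 2 is O(n) using c = 5, n0 = 1.
c, n0 = 5, 1
f = lambda n: 3 * n + 2

# Only the upper bound is required: 0 <= f(n) <= c*n for every n >= n0.
assert all(0 <= f(n) <= c * n for n in range(n0, 10_000))
```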
3. Omega Notation (Ω): Omega notation represents the lower bound of the running
time of an algorithm. Thus, it provides the best-case complexity of an algorithm.
Let g and f be functions from the set of natural numbers to itself. The function f
is said to be Ω(g) if there is a constant c > 0 and a natural number n0 such that
c * g(n) ≤ f(n) for all n ≥ n0.
Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n)
for all n ≥ n0 }
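For the same example function, f(n) = 3n + 2 is Ω(n): the lower bound 3n ≤ 3n + 2 holds for every n ≥ 1, so c = 3 and n0 = 1 satisfy the definition.

```python
# Check that f(n) = 3n + 2 is Omega(n) using c = 3, n0 = 1.
c, n0 = 3, 1
f = lambda n: 3 * n + 2

# Only the lower bound is required: 0 <= c*n <= f(n) for every n >= n0.
assert all(0 <= c * n <= f(n) for n in range(n0, 10_000))
```

Note that the same f(n) satisfied all three definitions with g(n) = n, which is exactly what f(n) = Θ(n) means: Θ is the combination of the O upper bound and the Ω lower bound.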
In worst-case analysis, we calculate the upper bound on the execution time of
an algorithm. We must know the case that causes the maximum number of
operations to be executed.
In best-case analysis, we calculate the lower bound on the execution time of
an algorithm. We must know the case that causes the minimum number of
operations to be executed. For example, in linear search the best case occurs
when the key is the very first element, so the best-case time complexity is
Ω(1).
In average-case analysis, we take all possible inputs and calculate the computing
time for each of them, sum all the calculated values, and divide the sum by the
total number of inputs. This requires knowing (or predicting) the distribution of
inputs.
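The three cases can be made concrete with linear search (an illustrative example, not from the notes). Finding a key at 0-based position i costs i + 1 comparisons, so over a list of n elements the best case is 1 comparison, the worst case is n, and, assuming the key is present and every position is equally likely, the average is (n + 1) / 2, which is still Θ(n).

```python
# Comparison counts for linear search over a list of n elements,
# assuming the key is present and each position is equally likely.
def comparisons(n, i):
    """Comparisons made when the key sits at index i (0-based)."""
    return i + 1

n = 100
best = min(comparisons(n, i) for i in range(n))         # key at the front
worst = max(comparisons(n, i) for i in range(n))        # key at the end
average = sum(comparisons(n, i) for i in range(n)) / n  # uniform positions
# best = 1, worst = n, average = (n + 1) / 2 = 50.5 for n = 100
```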