Unit 1-1
Algorithm:
An algorithm is a finite sequence of well-defined steps for solving a problem. Every algorithm must satisfy five characteristics:
1. Input: Zero or more quantities are supplied externally.
2. Output: At least one quantity is produced.
3. Definiteness: Each step is clear and unambiguous.
4. Finiteness: The algorithm terminates after a finite number of steps.
5. Effectiveness: Steps are basic enough to be executed, in principle, by a human with pencil
and paper within finite time.
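As an illustrative sketch (the function name is ours, not from the notes), a simple algorithm that satisfies all five characteristics is finding the largest of n numbers:

```c
/* Finds the maximum of n values.
 * Input: an array and its length. Output: the largest element.
 * Every step is definite, the loop is finite, and each operation
 * is basic enough to carry out by hand (effectiveness). */
int find_max(const int a[], int n)
{
    int max = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] > max)
            max = a[i];
    return max;
}
```

For example, applied to {3, 7, 2, 9, 4} it returns 9.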
Advantages of Algorithms:
An algorithm is easy to understand.
Because an algorithm breaks the problem down into smaller pieces or steps, it is easier for the
programmer to convert it into an actual program.
Disadvantages of Algorithms:
Writing an algorithm takes a long time, and it is difficult to show branching and looping clearly.
Steps to Design an Algorithm:
1. Choose an appropriate algorithm design technique (e.g., divide and conquer, dynamic
programming, greedy algorithms).
2. Break down the problem into smaller subproblems if necessary.
3. Define the steps of the algorithm in a clear and concise manner.
4. Consider how the algorithm will handle various input cases.
5. Test the Algorithm:
Thoroughly test the algorithm with various inputs, including edge cases and boundary
conditions.
Identify and fix any errors or bugs.
Validate that the algorithm produces the correct output for all valid inputs.
6. Optimize the Algorithm:
Analyze the algorithm for efficiency and improve it where possible.
7. Document the Algorithm:
Create documentation that explains the algorithm's purpose, inputs, outputs, and
implementation details.
Include information about the algorithm's time and space complexity.
Document any assumptions, constraints, or limitations of the algorithm.
Recursive Algorithms
A recursive algorithm calls itself, generally passing the result of one call as the input to the next.
Recursion is a method of simplification that divides a problem into sub-problems of the same
nature; the result of one recursive call is treated as the input for the next, and the repetition
proceeds in a self-similar fashion. The algorithm calls itself with smaller input values and obtains
the result by performing the operations on these smaller values. Computing a factorial and
generating the Fibonacci number series are classic examples of recursive algorithms.
Example: Writing a factorial function using recursion
int factorialA(int n)
{
    if (n <= 1)                     /* base case: 0! = 1! = 1 */
        return 1;
    return n * factorialA(n - 1);   /* recursive case */
}
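The Fibonacci series mentioned above can be written recursively in the same style; a minimal sketch (the function name is ours):

```c
/* Returns the nth Fibonacci number, with fib(0) = 0 and fib(1) = 1.
 * The base cases stop the recursion, just as in factorial;
 * each call works on smaller input values. */
int fib(int n)
{
    if (n <= 1)                       /* base cases */
        return n;
    return fib(n - 1) + fib(n - 2);   /* recursive case */
}
```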
Time Complexity:
Definition:
Time complexity quantifies the amount of time an algorithm takes to run as a function of the length
of the input.
Measurement:
It's often expressed using Big O notation (e.g., O(n), O(n log n), O(1)), which describes the upper
bound of the growth rate.
Examples:
O(1) (Constant): The algorithm takes the same amount of time regardless of the
input size (e.g., accessing an element in an array by index).
O(n) (Linear): The time increases linearly with the input size (e.g., iterating through a
list once).
O(log n) (Logarithmic): The time increases logarithmically with the input size (e.g.,
binary search).
O(n log n) (Loglinear): The time increases with both linear and logarithmic factors
(e.g., efficient sorting algorithms like merge sort and quicksort).
O(n²) (Quadratic): The time increases quadratically with the input size (e.g., nested
loops iterating through a list).
Importance:
Understanding time complexity helps in choosing the most efficient algorithm for a given task,
especially when dealing with large datasets.
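To make these growth rates concrete, here is a sketch (function names are illustrative, not from the notes) contrasting an O(n) linear search with an O(log n) binary search, the latter assuming a sorted array:

```c
/* O(n): may inspect every element once. */
int linear_search(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;
    return -1;
}

/* O(log n): halves the search range at each step; a[] must be sorted. */
int binary_search(const int a[], int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;   /* discard the lower half */
        else
            hi = mid - 1;   /* discard the upper half */
    }
    return -1;
}
```

For a million elements, the linear search may need a million comparisons, while the binary search needs about twenty.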
Space Complexity:
Definition:
Space complexity quantifies the amount of memory space an algorithm requires to run as a function
of the input size.
Measurement:
Like time complexity, space complexity is commonly expressed using Big O notation.
Components:
Space complexity includes the space used by the input, output, and any auxiliary space (temporary
variables, data structures).
Examples:
O(1) (Constant): The algorithm uses a fixed amount of memory regardless of the
input size (e.g., simple calculations).
O(n) (Linear): The memory usage grows linearly with the input size (e.g., storing an
array of n elements).
O(log n) (Logarithmic): The memory usage grows logarithmically with the input size
(e.g., recursive algorithms with a limited recursion depth).
O(n²) (Quadratic): The memory usage grows quadratically with the input size (e.g., a
matrix of n × n elements).
Importance:
Understanding space complexity is crucial for managing memory usage, especially in resource-
constrained environments.
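As a small illustration (function names are ours), summing n numbers needs only O(1) auxiliary space, while storing all running totals needs O(n):

```c
#include <stdlib.h>

/* O(1) auxiliary space: a single accumulator, regardless of n. */
long sum(const int a[], int n)
{
    long s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* O(n) auxiliary space: allocates an output array of n elements.
 * The caller must free() the returned pointer. */
long *prefix_sums(const int a[], int n)
{
    long *p = malloc(n * sizeof *p);
    long s = 0;
    for (int i = 0; i < n; i++) {
        s += a[i];
        p[i] = s;
    }
    return p;
}
```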
1. Asymptotic Notation
Asymptotic notation is used to express the execution time of an algorithm and to compare the
efficiency of various algorithms. Using asymptotic notation, we can describe the best-, average-
and worst-case behaviour of an algorithm. The lesser the execution time, the better the
performance of an algorithm.
Time complexity:
Computing the real running time of a process is not feasible. The time taken for the execution of
any process depends on the size of the input.
If the input size is ‘n’, then f(n) is a function of ‘n’ denoting the time complexity.
Big - Oh notation is used to define the upper bound of an algorithm in terms of Time
Complexity.
That means Big - Oh notation always indicates the maximum time required by an algorithm
for all input values; it describes the worst case of an algorithm's time complexity.
Big - Oh Notation can be defined as follows...
Consider function f(n) as the time complexity of an algorithm and g(n) as its most significant
term.
If f(n) <= C g(n) for all n >= n0, where C > 0 and n0 >= 1,
then we can represent f(n) as O(g(n)).
f(n) = O(g(n))
Consider a graph of f(n) and C g(n) with the input size (n) on the X-axis and the time required
on the Y-axis. In such a graph, after a particular input value n0, C g(n) is always greater than
f(n), which indicates the algorithm's upper bound.
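For example (a worked sketch, not from the notes): f(n) = 3n + 2 is O(n), because 3n + 2 <= 4n holds for all n >= 2, so C = 4 and n0 = 2 satisfy the definition. A quick numeric check in C (the function name is ours):

```c
/* Check f(n) = 3n + 2  <=  C*g(n) = 4n  for every n in [n0, limit],
 * with C = 4 and n0 = 2, as the Big-Oh definition requires.
 * Returns 1 if the inequality holds throughout, 0 otherwise. */
int holds_big_oh(int limit)
{
    for (int n = 2; n <= limit; n++)
        if (!(3 * n + 2 <= 4 * n))
            return 0;
    return 1;
}
```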
Big - Omega notation is used to define the lower bound of an algorithm in terms of Time
Complexity.
That means Big - Omega notation always indicates the minimum time required by an algorithm
for all input values; it describes the best case of an algorithm's time complexity.
Consider function f(n) as the time complexity of an algorithm and g(n) as its most significant term.
If f(n) >= C g(n) for all n >= n0, where C > 0 and n0 >= 1, then we can represent f(n) as Ω(g(n)).
f(n) = Ω(g(n))
Consider a graph of f(n) and C g(n) with the input size (n) on the X-axis and the time required
on the Y-axis. In such a graph, after a particular input value n0, C g(n) is always less than f(n),
which indicates the algorithm's lower bound.
Big - Theta notation is used to define the average (tight) bound of an algorithm in terms of Time
Complexity.
That means Big - Theta notation always indicates the average time required by an algorithm
for all input values; it describes the average case of an algorithm's time complexity.
Big - Theta Notation can be defined as follows...
Consider function f(n) as the time complexity of an algorithm and g(n) as its most significant
term.
If C1 g(n) <= f(n) <= C2 g(n) for all n >= n0, where C1 > 0, C2 > 0 and n0 >= 1,
then we can represent f(n) as Θ(g(n)).
f(n) = Θ(g(n))
Consider a graph of f(n), C1 g(n) and C2 g(n) with the input size (n) on the X-axis and the time
required on the Y-axis. In such a graph, after a particular input value n0, C1 g(n) is always less
than f(n) and C2 g(n) is always greater than f(n), which indicates the algorithm's average bound.
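As a worked sketch (not from the notes): f(n) = 3n + 2 is Θ(n), since 3n <= 3n + 2 <= 4n for all n >= 2, so C1 = 3, C2 = 4 and n0 = 2 satisfy the definition. A numeric check in C (the function name is ours):

```c
/* Check C1*g(n) <= f(n) <= C2*g(n) for every n in [n0, limit],
 * with f(n) = 3n + 2, g(n) = n, C1 = 3, C2 = 4 and n0 = 2.
 * Returns 1 if both inequalities hold throughout, 0 otherwise. */
int holds_big_theta(int limit)
{
    for (int n = 2; n <= limit; n++) {
        if (!(3 * n <= 3 * n + 2))   /* lower bound */
            return 0;
        if (!(3 * n + 2 <= 4 * n))   /* upper bound */
            return 0;
    }
    return 1;
}
```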
Divide and Conquer:
A divide and conquer algorithm solves a problem in three steps:
1. Divide: Break the given problem into sub-problems of the same type.
2. Conquer: Recursively solve the sub-problems.
3. Combine: Combine the solutions of the sub-problems that are part of the recursive
process to solve the actual problem.
Examples of Divide and Conquer Algorithm:
1. Merge Sort
2. Quick Sort
3. Binary Search
4. Strassen's algorithm
Merge Sort:
Merge Sort is one of the most popular sorting algorithms, based on the principle
of Divide and Conquer.
Example:
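A sketch of merge sort in C (the helper name merge and the array names are illustrative, not from the notes):

```c
#include <string.h>

/* Combine: merge two sorted halves a[lo..mid] and a[mid+1..hi]
 * into sorted order, using a temporary buffer (C99 VLA). */
static void merge(int a[], int lo, int mid, int hi)
{
    int tmp[hi - lo + 1];
    int i = lo, j = mid + 1, k = 0;
    while (i <= mid && j <= hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) tmp[k++] = a[i++];   /* leftovers from left half */
    while (j <= hi)  tmp[k++] = a[j++];   /* leftovers from right half */
    memcpy(a + lo, tmp, k * sizeof(int));
}

/* Divide: split the range in half.
 * Conquer: sort each half recursively.
 * Combine: merge the two sorted halves. */
void merge_sort(int a[], int lo, int hi)
{
    if (lo >= hi)                  /* 0 or 1 element: already sorted */
        return;
    int mid = lo + (hi - lo) / 2;
    merge_sort(a, lo, mid);
    merge_sort(a, mid + 1, hi);
    merge(a, lo, mid, hi);
}
```

Merge sort runs in O(n log n) time, matching the loglinear class listed earlier.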