
UNIT I

ROLE OF ALGORITHMS IN COMPUTING & COMPLEXITY ANALYSIS


1.1 Algorithms

Definition of Algorithm
The word Algorithm means "a set of finite rules or instructions to be followed
in calculations or other problem-solving operations"
or
"a procedure for solving a mathematical problem in a finite number of steps
that frequently involves recursive operations".

Therefore, an algorithm refers to a sequence of finite steps to solve a particular
problem.

What is the need for algorithms?


1. Algorithms are necessary for solving complex problems efficiently and
effectively.
2. They help to automate processes and make them more reliable, faster,
and easier to perform.
3. Algorithms also enable computers to perform tasks that would be difficult
or impossible for humans to do manually.
4. They are used in various fields such as mathematics, computer science,
engineering, finance, and many others to optimize processes, analyze
data, make predictions, and provide solutions to problems.

What are the Characteristics of an Algorithm?


Just as one would not follow arbitrary written instructions to cook a recipe,
but only a standard one, not all written instructions for programming are
an algorithm. For a set of instructions to be an algorithm, it must have the
following characteristics:
 Clear and Unambiguous: The algorithm should be unambiguous. Each of
its steps should be clear in all aspects and must lead to only one meaning.
 Well-Defined Inputs: If an algorithm says to take inputs, it should be well-
defined inputs. It may or may not take input.
 Well-Defined Outputs: The algorithm must clearly define what output will
be yielded and it should be well-defined as well. It should produce at least
1 output.
 Finite-ness: The algorithm must be finite, i.e. it should terminate after a
finite time.
 Feasible: The algorithm must be simple, generic, and practical, such that it
can be executed with the available resources. It must not contain some
future technology or anything.
 Language Independent: The Algorithm designed must be language-
independent, i.e. it must be just plain instructions that can be
implemented in any language, and yet the output will be the same, as
expected.
 Input: An algorithm has zero or more well-defined inputs.
 Output: An algorithm produces at least one well-defined output.
 Definiteness: All instructions in an algorithm must be unambiguous,
precise, and easy to interpret. By referring to any of the instructions in an
algorithm one can clearly understand what is to be done. Every
fundamental operator in instruction must be defined without any
ambiguity.
 Finiteness: An algorithm must terminate after a finite number of steps in
all test cases. Every instruction which contains a fundamental operator
must be terminated within a finite amount of time. Infinite loops or
recursive functions without base conditions do not possess finiteness.
 Effectiveness: An algorithm must be developed by using very basic,
simple, and feasible operations so that one can trace it out by using just
paper and pencil.

Properties of Algorithm:
 It should terminate after a finite time.
 It should produce at least one output.
 It should take zero or more input.
 It should be deterministic, meaning it gives the same output for the same
input.
 Every step in the algorithm must be effective i.e. every step should do
some work.

Advantages of Algorithms:
 It is easy to understand.
 An algorithm is a step-wise representation of a solution to a given
problem.
 In an algorithm, the problem is broken down into smaller pieces or steps;
hence, it is easier for the programmer to convert it into an actual program.

Disadvantages of Algorithms:
 Writing an algorithm takes a long time, so it is time-consuming.
 Understanding complex logic through algorithms can be very difficult.
 Branching and looping statements are difficult to represent in algorithms.

Example:
Algorithm to add 3 numbers and print their sum:
1. START
2. Declare 3 integer variables num1, num2, and num3.
3. Take the three numbers, to be added, as inputs in variables num1, num2,
and num3 respectively.
4. Declare an integer variable sum to store the resultant sum of the 3
numbers.
5. Add the 3 numbers and store the result in the variable sum.
6. Print the value of the variable sum
7. END
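
A minimal Python sketch of the same steps (the function name add_three_numbers and
the input prompts are illustrative, not part of the original algorithm):

# Implements the algorithm above: read three integers, add them, print the sum.
def add_three_numbers():
    num1 = int(input("Enter num1: "))   # steps 2-3: declare and read the inputs
    num2 = int(input("Enter num2: "))
    num3 = int(input("Enter num3: "))
    total = num1 + num2 + num3          # steps 4-5: compute and store the sum
    print("Sum =", total)               # step 6: print the result

if __name__ == "__main__":
    add_three_numbers()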

1.2 Algorithms as a Technology

Suppose computers were infinitely fast and computer memory was free.
Would you have any reason to study algorithms? The answer is yes, if for
no other reason than that you would still like to demonstrate that your
solution method terminates and does so with the correct answer.
If computers were infinitely fast, any correct method for solving a problem
would do. You would probably want your implementation to be within the
bounds of good software engineering practice (i.e., well designed and
documented), but you would most often use whichever method was the
easiest to implement.
Of course, computers may be fast, but they are not infinitely fast. And
memory may be cheap, but it is not free. Computing time is therefore a
bounded resource, and so is space in memory. These resources should be
used wisely, and algorithms that are efficient in terms of time or space will
help you do so.
Efficiency
Algorithms devised to solve the same problem often differ dramatically in
their efficiency. These differences can be much more significant than
differences due to hardware and software.

As an example, consider two algorithms for sorting. The first, known as
insertion sort, takes time roughly equal to c1n² to sort n items, where c1 is a
constant that does not depend on n. That is, it takes time roughly proportional
to n². The second, merge sort, takes time roughly equal to c2n lg n, where lg n
stands for log₂ n and c2 is another constant that also does not depend on n.
Insertion sort usually has a smaller constant factor than merge sort, so that
c1 < c2. We shall see that the constant factors can be far less significant in
the running time than the dependence on the input size n. Where merge sort has
a factor of lg n in its running time, insertion sort has a factor of n, which is
much larger. Although insertion sort is usually faster than merge sort for small
input sizes, once the input size n becomes large enough, merge sort's advantage
of lg n vs. n will more than compensate for the difference in constant factors.
No matter how much smaller c1 is than c2, there will always be a crossover point
beyond which merge sort is faster.
For a concrete example, let us pit a faster computer (computer A) running
insertion sort against a slower computer (computer B) running merge sort.
They each must sort an array of one million numbers. Suppose that
computer A executes one billion instructions per second and computer B
executes only ten million instructions per second, so that computer A is
100 times faster than computer B in raw computing power. To make the
difference even more dramatic, suppose that the world's craftiest
programmer codes insertion sort in machine language for computer A, and
the resulting code requires 2n² instructions to sort n numbers. (Here, c1 =
2.) Merge sort, on the other hand, is programmed for computer B by an
average programmer using a high-level language with an inefficient
compiler, with the resulting code taking 50n lg n instructions (so that c2 =
50). To sort one million numbers, computer A takes

2 · (10⁶)² instructions / 10⁹ instructions per second = 2000 seconds,

while computer B takes

50 · 10⁶ · lg(10⁶) instructions / 10⁷ instructions per second ≈ 100 seconds.
By using an algorithm whose running time grows more slowly, even with a
poor compiler, computer B runs 20 times faster than computer A! The
advantage of merge sort is even more pronounced when we sort ten
million numbers: where insertion sort takes approximately 2.3 days, merge
sort takes under 20 minutes. In general, as the problem size increases, so
does the relative advantage of merge sort.
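
The figures in this comparison can be reproduced from the constants stated above
(2n² instructions on a machine executing 10⁹ instructions per second versus
50n lg n instructions on a machine executing 10⁷ instructions per second); a small
Python sketch, purely illustrative:

import math

def insertion_sort_seconds(n, instructions_per_second=1e9):
    # Computer A: 2*n^2 instructions at 10^9 instructions/second.
    return 2 * n**2 / instructions_per_second

def merge_sort_seconds(n, instructions_per_second=1e7):
    # Computer B: 50*n*lg(n) instructions at 10^7 instructions/second.
    return 50 * n * math.log2(n) / instructions_per_second

for n in (10**6, 10**7):
    print(n, insertion_sort_seconds(n), merge_sort_seconds(n))
# n = 10^6: about 2000 s vs. about 100 s (a factor of 20)
# n = 10^7: about 200,000 s (~2.3 days) vs. under 20 minutes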
Algorithms and other technologies
The example above shows that algorithms, like computer hardware, are
a technology. Total system performance depends on choosing efficient
algorithms as much as on choosing fast hardware. Just as rapid advances
are being made in other computer technologies, they are being made in
algorithms as well.
You might wonder whether algorithms are truly that important on
contemporary computers in light of other advanced technologies, such as
 hardware with high clock rates, pipelining, and superscalar architectures,
 easy-to-use, intuitive graphical user interfaces (GUIs),
 object-oriented systems, and
 local-area and wide-area networking.
The answer is yes. Although there are some applications that do not
explicitly require algorithmic content at the application level (e.g., some
simple web-based applications), most applications do require a degree of
algorithmic content of their own. For example, consider a web-based service that
determines how to travel from one location to another. (Several such
services existed at the time of this writing.) Its implementation would rely
on fast hardware, a graphical user interface, wide-area networking, and
also possibly on object orientation. However, it would also require
algorithms for certain operations, such as finding routes (probably using a
shortest-path algorithm), rendering maps, and interpolating addresses.
Moreover, even an application that does not require algorithmic content at
the application level relies heavily upon algorithms. Does the application
rely on fast hardware? The hardware design used algorithms. Does the
application rely on graphical user interfaces? The design of any GUI relies
on algorithms. Does the application rely on networking? Routing in
networks relies heavily on algorithms. Was the application written in a
language other than machine code? Then it was processed by a compiler,
interpreter, or assembler, all of which make extensive use of algorithms.
Algorithms are at the core of most technologies used in contemporary
computers.
Furthermore, with the ever-increasing capacities of computers, we use
them to solve larger problems than ever before. As we saw in the above
comparison between insertion sort and merge sort, it is at larger problem
sizes that the differences in efficiencies between algorithms become
particularly prominent.
Having a solid base of algorithmic knowledge and technique is one
characteristic that separates the truly skilled programmers from the
novices. With modern computing technology, you can accomplish some
tasks without knowing much about algorithms, but with a good
background in algorithms, you can do much, much more.

1.3 Time and Space complexity of algorithms

Time Complexity
Time complexity is defined in terms of how many basic operations an algorithm
performs as a function of the length of the input. It is not a measurement of
the actual time it takes to execute a particular program, because factors such
as the programming language, the operating system, and the processing power
vary from one environment to another.
Time complexity is a type of computational complexity that describes the
time required to execute an algorithm. It is highly dependent on the size of
the processed data, and it helps to define an algorithm's effectiveness and to
evaluate its performance.

Space Complexity
When an algorithm is run on a computer, it necessitates a certain amount
of memory space. The amount of memory a program uses while executing is
represented by its space complexity. Because a program requires memory to
store input data and temporary values while running, the space complexity is
the sum of the auxiliary space and the input space.

What Does It Take to Develop a Good Algorithm?

A good algorithm executes quickly and saves space in the process. Ideally you
should find a happy medium of space and time (space and time complexity), but a
reasonable trade-off is often enough. Now, take a look at a simple algorithm for
calculating the product ("mul") of two numbers.

Step 1: Start.
Step 2: Create two variables (a & b).
Step 3: Store integer values in ‘a’ and ‘b.’ -> Input
Step 4: Create a variable named 'mul'
Step 5: Store the product of 'a' and 'b' in the variable 'mul'. -> Output
Step 6: End.

How Significant Are Space and Time Complexity?


Significant in Terms of Time Complexity
The input size has a strong relationship with time complexity. As the size
of the input increases, so does the runtime, or the amount of time it takes
the algorithm to run.
Here is an example.
Assume you have a set of numbers S= (10, 50, 20, 15, 30)
There are numerous algorithms for sorting the given numbers. However,
not all of them are effective. To determine which is the most effective, you
must perform computational analysis on each algorithm.

Here are some of the most critical findings from such a comparison:
 The test covered the following sorting algorithms: Quicksort, Insertion
sort, Bubble sort, and Heapsort.
 Python was the programming language used to carry out the comparison, with
input sizes ranging from 50 to 500 elements.
 The results were as follows: Heapsort performed well regardless of the
length of the lists; on the other hand, Insertion sort and Bubble sort
performed far worse, with computing time increasing significantly as the
input grew.
 Before you can run an analysis on any algorithm, you must first determine
its stability. Understanding your data is the most important aspect of
conducting a successful analysis.

1.4 Asymptotic Notations

Asymptotic analysis of an algorithm refers to defining the mathematical
foundation/framing of its run-time performance. Using asymptotic analysis,
we can very well conclude the best case, average case, and worst-case
scenario of an algorithm.

Asymptotic analysis is the big idea that addresses these issues when
analysing algorithms. In asymptotic analysis, we evaluate the performance of
an algorithm in terms of the input size (we do not measure the actual running
time); we calculate how the time (or space) taken by an algorithm increases
with the input size.
Asymptotic analysis is input bound i.e., if there's no input to the algorithm,
it is concluded to work in a constant time. Other than the "input" all other
factors are considered constant.
Asymptotic analysis refers to computing the running time of any operation
in mathematical units of computation. For example, the running time of
one operation may be computed as f(n) and that of another as g(n²). This
means the running time of the first operation will increase linearly with n,
while the running time of the second will increase quadratically as n grows.
Similarly, the running times of both operations will be nearly the same if n
is sufficiently small.
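
A tiny sketch (the sample values of n below are arbitrary assumptions, not from the
text) that illustrates how a linear cost f(n) = n and a quadratic cost g(n) = n²
stay close for small n and diverge as n grows:

# Illustrate linear vs. quadratic growth for a few input sizes.
def f(n):
    return n          # cost modelled as linear in n

def g(n):
    return n ** 2     # cost modelled as quadratic in n

for n in (2, 10, 100, 1000):
    print(f"n={n:>5}  f(n)={f(n):>8}  g(n)={g(n):>10}")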

Usually, the time required by an algorithm falls under three types:
 Best Case − Minimum time required for program execution.
 Average Case − Average time required for program execution.
 Worst Case − Maximum time required for program execution.

1.5 Measurement of Complexity of an Algorithm


Based on the above three types of time requirements, there are three cases in
which to analyse an algorithm:
1. Worst Case Analysis (Mostly used)
In the worst-case analysis, we calculate the upper bound on the running time of
an algorithm. We must know the case that causes a maximum number of
operations to be executed. For Linear Search, the worst case happens when the
element to be searched (x) is not present in the array. When x is not present, the
search() function compares it with all the elements of arr[] one by one.
Therefore, the worst-case time complexity of the linear search would be O(n).
2. Best Case Analysis (Very Rarely used)
In the best-case analysis, we calculate the lower bound on the running time of an
algorithm. We must know the case that causes a minimum number of operations
to be executed. In the linear search problem, the best case occurs when x is
present at the first location. The number of operations in the best case is
constant (not dependent on n). So the time complexity in the best case would be
Ω(1).
3. Average Case Analysis (Rarely used)
In average case analysis, we take all possible inputs and calculate the computing
time for all of the inputs. Sum all the calculated values and divide the sum by the
total number of inputs. We must know (or predict) the distribution of cases. For
the linear search problem, let us assume that all cases are uniformly
distributed (including the case of x not being present in the array). So we sum all
the cases and divide the sum by (n+1); the resulting average-case time complexity
works out to O(n).
1. Worst Case Analysis:
Most of the time, we do worst-case analyses to analyze algorithms. In the worst
analysis, we guarantee an upper bound on the running time of an algorithm
which is good information.
2. Average Case Analysis
The average case analysis is not easy to do in most practical cases and it is
rarely done. In the average case analysis, we must know (or predict) the
mathematical distribution of all possible inputs.
3. Best Case Analysis
The Best-Case analysis is bogus. Guaranteeing a lower bound on an algorithm
doesn’t provide any information as in the worst case, an algorithm may take
years to run.
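
A minimal linear search in Python, matching the analysis above (the function name
linear_search and the sample list are illustrative):

def linear_search(arr, x):
    """Return the index of x in arr, or -1 if x is not present."""
    for i, value in enumerate(arr):
        if value == x:
            return i      # best case: x at index 0 -> Omega(1)
    return -1             # worst case: x absent -> all n elements compared, O(n)

data = [10, 50, 20, 15, 30]
print(linear_search(data, 10))   # best case: one comparison
print(linear_search(data, 99))   # worst case: n comparisons
# Average case (uniform over the n+1 outcomes): roughly n/2 comparisons, i.e. O(n).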
1.6 Asymptotic Notations
Asymptotic notations are the mathematical notations used to describe the
running time of an algorithm when the input tends towards a particular
value or a limiting value.
For example: In bubble sort, when the input array is already sorted, the
time taken by the algorithm is linear i.e. the best case.
But, when the input array is in reverse order, the algorithm takes the
maximum time (quadratic) to sort the elements i.e. the worst case.
When the input array is neither sorted nor in reverse order, then it takes
average time. These durations are denoted using asymptotic notations.

Types of Asymptotic Notations


Mainly there are three different types of asymptotic notations:
 Big O Notation
 Omega Notation
 Theta Notation

Now, we will discuss them one by one in complete detail.

Big-O Notation (O)


Big O notation is a mathematical concept used in computer science to
describe the upper bound of an algorithm's time or space complexity. It
provides a way to express the worst-case scenario of how an algorithm
performs as the size of the input increases.
Mathematical Representation of Big O Notation
A function f(n) is said to be O(g(n)) if there exist positive constants c0 and
c1 such that 0 ≤ f(n) ≤ c0·g(n) for all n ≥ c1. This means that for
sufficiently large values of n, the function f(n) does not grow faster than
g(n) up to a constant factor.
O(g(n)) = {f(n): there exist positive constants c0 and c1 such that 0 ≤ f(n)
≤ c0·g(n) for all n ≥ c1}.
For Example:
Let f(n) = n² + n + 1 and g(n) = n².
Then n² + n + 1 ≤ c·n² for all n ≥ 1 when c = 3, so the time complexity of the
above function is O(n²): f(n) grows no faster than a constant multiple of n².

Omega Notation(Ω)
Omega notation is used to denote the lower bound of the algorithm; it represents
the minimum running time of an algorithm. Therefore, it provides the best-case
complexity of any algorithm.
Ω(g(n)) = {f(n): there exist positive constants c0 and c1, such that 0 ≤
c0·g(n) ≤ f(n) for all n ≥ c1}.
For Example:
Let,
 f(n) = n² + n
Then, the best-case time complexity will be Ω(n²).
 f(n) = 100n + log(n)
Then, the best-case time complexity will be Ω(n).

Theta Notation(θ)
Theta notation is used to denote the average bound of the algorithm; it
bounds a function from above and below, that’s why it is used to represent
exact asymptotic behaviour.
Θ(g(n)) = {f(n): there exist positive constants c0, c1 and c2, such that 0 ≤
c0·g(n) ≤ f(n) ≤ c1·g(n) for all n ≥ c2}.
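
As a quick sanity check of the three definitions, the bounds for f(n) = n² + n + 1
from the Big-O example can be verified numerically; the constants below (1 and 3,
with threshold n ≥ 1) are one valid choice among many, not the only one:

# Numerically check  1*n^2 <= n^2 + n + 1 <= 3*n^2  for n >= 1,
# which witnesses f(n) = Θ(n^2), and hence O(n^2) and Ω(n^2) as well.
def f(n):
    return n**2 + n + 1

def g(n):
    return n**2

for n in range(1, 1001):
    assert 1 * g(n) <= f(n) <= 3 * g(n)
print("Bounds hold for n = 1 .. 1000")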

Limitations of Asymptotic Analysis


 Dependence on Large Input Size: Asymptotic analysis heavily depends
on large input size (i.e., value tends to infinity). But in reality, the input
may not always be sufficiently large.
 Ignores Constant Factors: Asymptotic analysis mainly focuses on the
algorithm’s growth rate (i.e., highest value) and discards the smaller
terms.
 Doesn’t Indicate the Exact Running Time: It approximates how
running time grows with the size of input but doesn’t provide the precise
running time.
 Doesn’t Consider Memory Usage: It typically focuses on running time
and ignores memory usage or other resources unless specifically dealing
with space complexity.
 Ignores Coefficient of Dominant Terms: Similar to the smaller terms, it
also ignores the coefficient of the dominant term, i.e., if two algorithms
have the same dominant term, they would be considered equivalent.
 Doesn’t Hold for All Algorithms: Algorithms such as randomized
algorithms and algorithms with complex control structures may not be
well-suited for traditional asymptotic analysis.

1.7 Importance of efficient algorithms

Algorithmic efficiency enables better software. The efficiency of the algorithms
used in an application can impact its overall performance; hence the importance
of measuring the performance and complexity of algorithms through means that
are accessible not only to mathematicians but also to any software engineer who
aims to excel in algorithm design.

Algorithm complexity measurement should be part of the design process. In order
to measure how complex an algorithm is, and to be able to spot the appropriate
parts of the algorithm composition in order to solve inconsistencies in algorithm
design and analysis, the software engineer has to have an accessible way to
understand an algorithm’s complexity in simple terms and come up with viable
and efficient solutions to simplify it where possible.

In this series on algorithm design, we will explore Big O notation as a
comprehensive way to understand and avoid falling into algorithm design errors
that can impact the efficiency of our system. We will explore the fundamentals of
algorithm complexity analysis when it comes to constant, linear and quadratic
problem sizes. We will also demonstrate (by implementing pseudocode) how to spot
unnecessary complexity so that we can prevent it from appearing in our code.

Follow along to explore the world of Big O notation and start implementing the
knowledge in real-world scenarios.

Big O Notation

Big O notation (with a capital letter O, not a zero), also called Landau’s symbol,
is a symbolism used in complexity theory, computer science, and mathematics to
describe the asymptotic behavior of functions. Basically, it tells you how fast a
function grows or declines.

Big O notation comes in to simplify the complexity measurement process (the
approximation). The reason (or at least one of the main reasons) why Big O
notation has become widespread as a valid and reliable way to measure
complexity may be due to the simplicity of the notation itself.

In Big O notation, you must ignore constants. Constants are determined by (very
often) uncontrollable factors such as network, hardware, memory and storage.
Therefore, it would make sense to “trim” them from the context since the
algorithm will be executed in an uncontrollable environment, isolated from direct
and indirect intervention; in other words, algorithms must be measured while
keeping in mind the worst-case scenario.

Algorithm Complexity and Performance

Algorithm complexity can be translated as the processing power implemented by
an algorithm in a certain amount of time. Such complexity is tightly related to
central processing unit (CPU) usage. Perhaps a point needs to be clarified here
regarding complexity and performance: the former is the amount of resources
needed for a given program to execute a set of tasks, while the latter refers to
how well the program executes these tasks.

The relationship between the two is a one-way relationship, with complexity
affecting performance. We can then say that performance is a function of
complexity, and therefore it is critical to understand the complexity of an
algorithm before calculating its performance. This is a relationship between the
number of operations and the problem size.

A constant-time algorithm is less complex than a linear-time algorithm, which, in
turn, is less complex than a quadratic-time algorithm: O(1) < O(N) < O(N²).

1.8 Program performance measurement

1) Time complexity
One of the most common ways to measure algorithm performance is time
complexity, which is the amount of time it takes for an algorithm to
complete its task as a function of the input size. Time complexity is usually
expressed using the big O notation, which gives the upper bound of the
worst-case scenario. For example, O(n) means that the algorithm takes at
most linear time proportional to the input size, while O(n^2) means that it
takes quadratic time. Time complexity helps you estimate how your
algorithm scales with larger inputs and how it compares to other
algorithms with different big O values.
2) Space complexity
Another way to measure algorithm performance is space complexity,
which is the amount of memory or storage that an algorithm uses as a
function of the input size. Space complexity is also expressed using the big
O notation, but with a different meaning. For example, O(1) means that
the algorithm uses a constant amount of space regardless of the input
size, while O(n) means that it uses linear space proportional to the input
size. Space complexity helps you evaluate how your algorithm manages
the resources and how it affects the overall performance of your program.

3) Runtime analysis
A more practical way to measure algorithm performance is runtime
analysis, which is the actual measurement of how long an algorithm takes
to run on a specific machine or environment. Runtime analysis can be
done using various tools, such as timers, profilers, or benchmarks, that
record the execution time of an algorithm or a part of it. Runtime analysis
helps you test your algorithm on real data and conditions and identify any
bottlenecks or inefficiencies that may not be obvious from the theoretical
analysis.
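
A small sketch of such a runtime measurement using Python's standard timeit module
(the list size and the two operations timed here are arbitrary choices for
illustration):

import random
import timeit

data = [random.randint(0, 1_000_000) for _ in range(10_000)]

def builtin_sort():
    sorted(data)      # O(n log n) operation

def linear_scan():
    max(data)         # O(n) operation

# Record the execution time of each snippet over 100 repeated runs.
print("sorted():", timeit.timeit(builtin_sort, number=100), "seconds")
print("max():   ", timeit.timeit(linear_scan, number=100), "seconds")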

4) Asymptotic analysis
A more general way to measure algorithm performance is asymptotic
analysis, which is the study of how an algorithm behaves as the input size
approaches infinity. Asymptotic analysis is useful for comparing algorithms
that have different time or space complexities and for determining the
best or worst cases. Asymptotic analysis uses different notations, such as
big O, big Theta, and big Omega, to describe the upper, lower, or tight
bounds of an algorithm's performance. Asymptotic analysis helps you
understand the fundamental limits and trade-offs of your algorithm and
how it relates to the problem domain.

5) Empirical analysis
A more experimental way to measure algorithm performance is empirical
analysis, which is the observation and evaluation of how an algorithm
performs on various inputs and scenarios. Empirical analysis can be done
using different methods, such as simulations, case studies, or user studies,
that generate or collect data and feedback on an algorithm's output or
behavior. Empirical analysis helps you validate your algorithm's
correctness, accuracy, robustness, or usability and how it meets the
requirements and expectations of the users or clients.
6) Algorithm design techniques
One of the best ways to improve algorithm performance is to use
algorithm design techniques, which are strategies or principles that guide
you to create efficient and effective algorithms. Algorithm design
techniques can be divided into two categories: problem reduction and
solution construction. Problem reduction techniques, such as divide and
conquer, dynamic programming, or greedy algorithms, help you simplify or
transform a complex problem into smaller or easier subproblems. Solution
construction techniques, such as brute force, backtracking, or heuristics,
help you generate or search for a feasible or optimal solution for a
problem. Algorithm design techniques help you optimize your algorithm's
time and space complexity, runtime, asymptotic behavior, or empirical
performance.
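
As one concrete illustration of the divide and conquer idea mentioned above, here is
a standard binary search sketch in Python (it assumes the input list is already
sorted; the names are illustrative):

def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2        # split the problem in half each step
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1            # discard the left half
        else:
            hi = mid - 1            # discard the right half
    return -1

print(binary_search([10, 15, 20, 30, 50], 30))   # -> 3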

1.9 Recurrence Relations – Substitution Method


Recursion is a method to solve a problem where the solution depends on
solutions to smaller subproblems of the same problem. Recursive functions
(function calling itself) are used to solve problems based on Recursion. The
main challenge with recursion is to find the time complexity of the
Recursive function. In this article, we will learn about how to find the time
complexity of Recursive functions using Substitution Method.

What is a Recurrence Relation?


Whenever any function makes a recursive call to itself, its time can be
computed by a Recurrence Relation. Recurrence Relation is simply a
mathematical relation/equation that can give the value of any term in
terms of some previous smaller terms. For example,
T(n) = T(n-1) + n
It is a recurrence relation because the value of the nth term is given in terms
of its previous term, i.e. the (n-1)th term.

Types of Recurrence Relation:


There are different types of recurrence relation that can be possible in the
mathematical world. Some of them are-
1. Linear Recurrence Relation: In a linear recurrence relation, every term
depends linearly on its previous terms. An example of a linear recurrence
relation is
T(n) = T(n-1) + T(n-2) + T(n-3)
2. Divide and Conquer Recurrence Relation: This is the type of recurrence
relation obtained from divide and conquer algorithms. An example of such a
recurrence relation is
T(n) = 3T(n/2) + 9n
3. First Order Recurrence Relation: This is the type of recurrence relation
in which every term depends only on the immediately preceding term. An example
of this type of recurrence relation is
T(n) = T(n-1)²
4. Higher Order Recurrence Relation: This is the type of recurrence relation
where a term depends not just on one previous term but on multiple previous
terms. If it depends on two previous terms it is called second order, on three
previous terms third order, and so on. An example of a third order recurrence
relation is
T(n) = 2T(n-1)² + KT(n-2) + T(n-3)
So far we have seen different recurrence relations, but how do we find the time
taken by a recursive algorithm? To calculate the time, we need to solve the
recurrence relation. There are three well-known methods for solving recurrences:
 Substitution Method
 Recursive Tree Method
 Master Theorem
In this article we focus on the Substitution Method.

Substitution Method:
The Substitution Method is a well-known method for solving recurrences.
There are two types of substitution:
1. Forward Substitution
2. Backward Substitution
1. Forward Substitution:
It is called forward substitution because we substitute the value of each term
into the following terms. It uses the following steps to find the running time
from a recurrence:
 Pick the recurrence relation and the given initial condition.
 Put the value from the previous recurrence into the next recurrence.
 Observe and guess the pattern and the running time.
 Prove that the guessed result is correct using mathematical induction.
Now we will use these steps to solve a problem. The problem is:
T(n) = T(n-1) + n, n > 1
T(n) = 1, n = 1
Now we will go step by step:
1. Pick the recurrence and the given initial condition:
T(n) = T(n-1) + n, n > 1
T(n) = 1, n = 1
2. Put the value from the previous recurrence into the next recurrence:
T(1) = 1
T(2) = T(1) + 2 = 1 + 2 = 3
T(3) = T(2) + 3 = 1 + 2 + 3 = 6
T(4) = T(3) + 4 = 1 + 2 + 3 + 4 = 10
3. Observe and guess the pattern and the time:
The guessed pattern is T(n) = 1 + 2 + 3 + … + n = n(n+1)/2, so the time
complexity will be O(n²).
4. Prove that the guessed result is correct using mathematical induction:
 Prove T(1) is true:
T(1) = 1·(1+1)/2 = 2/2 = 1, and from the definition of the recurrence we know
T(1) = 1. Hence T(1) is true.
 Assume T(N-1) to be true:
Assume T(N-1) = (N-1)·(N-1+1)/2 = N(N-1)/2 to be true.
 Then prove T(N) is true:
T(N) = T(N-1) + N from the recurrence definition.
Now, T(N-1) = N(N-1)/2, so T(N) = T(N-1) + N = N(N-1)/2 + N = (N(N-1) + 2N)/2 =
N(N+1)/2.
From our guess, T(N) = N(N+1)/2 as well, hence T(N) is true.
Therefore our guess was correct and the running time is O(N²).
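
The guess can also be cross-checked numerically by evaluating the recurrence
directly; a small sketch, assuming the same recurrence T(n) = T(n-1) + n with
T(1) = 1:

# Evaluate T(n) = T(n-1) + n with T(1) = 1 iteratively and compare it
# against the closed form n(n+1)/2 obtained by forward substitution.
def T(n):
    total = 1                  # T(1) = 1
    for k in range(2, n + 1):
        total += k             # T(k) = T(k-1) + k
    return total

for n in (1, 2, 3, 4, 10, 100):
    assert T(n) == n * (n + 1) // 2
    print(n, T(n))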

2. Backward Substitution:
It is called backward substitution because we substitute the recurrence of each
term into the previous terms. It uses the following steps to find the running
time from a recurrence:
 Take the main recurrence and try to write the recurrences of the previous
terms.
 Take the immediately previous recurrence and substitute it into the main
recurrence.
 Again take one more previous recurrence and substitute it into the main
recurrence.
 Repeat this process until you reach the initial condition.
 Then substitute the value from the initial condition and get the solution.
Now we will use these steps to solve a problem. The problem is:
T(n) = T(n-1) + n, n > 1
T(n) = 1, n = 1
Now we will go step by step:
1. Take the main recurrence and try to write the recurrences of the previous
terms:
T(n) = T(n-1) + n
T(n-1) = T(n-2) + n - 1
T(n-2) = T(n-3) + n - 2
2. Take the immediately previous recurrence and substitute it into the main
recurrence:
Put T(n-1) into T(n), so T(n) = T(n-2) + n-1 + n
3. Again take one more previous recurrence and substitute it into the main
recurrence:
Put T(n-2) into T(n), so T(n) = T(n-3) + n-2 + n-1 + n
4. Repeat this process until you reach the initial condition:
Similarly we can find T(n-3), T(n-4), and so on and insert them into T(n).
Eventually we get T(n) = T(1) + 2 + 3 + 4 + … + n-1 + n
5. Then substitute the value from the initial condition and get the solution:
Put T(1) = 1, so T(n) = 1 + 2 + 3 + 4 + … + n-1 + n = n(n+1)/2. Hence the
running time is O(N²).

Limitations of Substitution method:


The Substitution method is a useful technique to solve recurrence
relations, but it also has some limitations. Some of the limitations are:
 It is not guaranteed that we will find the solution, as the substitution
method is based on guesses.
 It doesn’t provide guidance on how to make an accurate guess, often
relying on intuition or trial and error.
 It may only yield a specific or approximate solution rather than the most
general or precise one.
 The substitution method isn’t universally applicable to all recurrence
relations, especially those with complex or variable forms that do not get
simplified using substitution.

1.10 Recursion
The process in which a function calls itself directly or indirectly is called
recursion and the corresponding function is called a recursive function.
Using a recursive algorithm, certain problems can be solved quite easily.
Examples of such problems are the Towers of Hanoi (TOH),
Inorder/Preorder/Postorder tree traversals, DFS of a graph, etc. A recursive
function solves a particular problem by calling a copy of itself and solving
smaller subproblems of the original problem. Many more recursive calls can be
generated as and when required. It is essential to provide a base case in order
to terminate this recursion process. So, we can say that each time the function
calls itself with a simpler version of the original problem.
Need of Recursion
Recursion is an amazing technique with the help of which we can reduce the
length of our code and make it easier to read and write. It has certain
advantages over the iteration technique, which will be discussed later. For a task
that can be defined in terms of similar subtasks, recursion is one of the best
solutions. For example, the factorial of a number.
Properties of Recursion:
 Performing the same operations multiple times with different inputs.
 In every step, we try smaller inputs to make the problem smaller.
 Base condition is needed to stop the recursion otherwise infinite loop will
occur.
How a particular problem is solved using recursion?
The idea is to represent a problem in terms of one or more smaller problems, and
add one or more base conditions that stop the recursion. For example, we can
compute the factorial of n if we know the factorial of (n-1). The base case for
factorial would be n = 0: we return 1 when n = 0.
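
A minimal recursive factorial in Python, showing the base condition described above:

def factorial(n):
    """n! computed recursively; the base case n == 0 stops the recursion."""
    if n == 0:                        # base condition: 0! = 1
        return 1
    return n * factorial(n - 1)       # recursive call on a smaller subproblem

print(factorial(5))   # -> 120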
1.11 Tree Method
The Recursion Tree Method is a way of solving recurrence relations. In this
method, a recurrence relation is converted into a recursion tree in which each
node represents the cost incurred at that level of recursion. To find the total
cost, the costs of all the levels are summed up.
Steps to solve a recurrence relation using the recursion tree method:
1. Draw a recursion tree for the given recurrence relation.
2. Calculate the cost at each level and count the total number of levels in the
recursion tree.
3. Count the total number of nodes in the last level and calculate the cost of
the last level.
4. Sum up the cost of all the levels in the recursion tree.

Steps of Recursion Tree method

There are mainly three steps in the recursion tree method. In this section,
we will learn each of them one by one.
Step 1
 Construct a recursion tree from the recurrence relation at hand.
Step 2
 Find the total number of levels in the recursion tree.
 Compute the cost of each level in the tree.
 Calculate the total number of nodes in the last level or the leaf nodes.
 Compute the cost of the last level.
Step 3
 Sum up the cost of all the levels and express the result obtained
in standard asymptotic notation.
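
For instance, applying these steps to the standard recurrence T(n) = 2T(n/2) + n
(a classic example, not taken from the text above): the tree has about lg n levels,
each level costs roughly n, and the leaves contribute another Θ(n), so the total is
Θ(n log n). A small numerical sketch of the level-by-level sum:

import math

# Sum the cost of each level of the recursion tree for T(n) = 2T(n/2) + n.
# Level i has 2^i nodes, each working on a subproblem of size n / 2^i,
# so every level costs about n in total.
def recursion_tree_cost(n):
    total = 0
    level = 0
    while n // (2 ** level) >= 1:
        nodes = 2 ** level
        subproblem_size = n / (2 ** level)
        total += nodes * subproblem_size   # cost of this level ~ n
        level += 1
    return total, level

cost, levels = recursion_tree_cost(1024)
print(levels, cost, 1024 * math.log2(1024) + 1024)   # ~ n*lg n plus the leaf level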

1.12 Data Structures and Algorithms

Data Structures is about how data can be stored in different structures.
Algorithms is about how to solve different problems, often by searching through
and manipulating data structures.
Theory about Data Structures and Algorithms (DSA) helps us to use large amounts
of data to solve problems efficiently.
Data structures and algorithms (DSA) go hand in hand. A data structure is not
worth much if you cannot search through it or manipulate it efficiently using
algorithms, and the algorithms in this tutorial are not worth much without a
data structure to work on.
DSA is about finding efficient ways to store and retrieve data, to perform
operations on data, and to solve specific problems.
By understanding DSA, you can:
 Decide which data structure or algorithm is best for a given situation.
 Make programs that run faster or use less memory.
 Understand how to approach complex problems and solve them in a
systematic way.

Where are Data Structures and Algorithms Needed?


Data Structures and Algorithms (DSA) are used in virtually every software
system, from operating systems to web applications:
 For managing large amounts of data, such as in a social network or a
search engine.
 For scheduling tasks, to decide which task a computer should do first.
 For planning routes, like in a GPS system to find the shortest path from A
to B.
 For optimizing processes, such as arranging tasks so they can be
completed as quickly as possible.
 For solving complex problems: From finding the best way to pack a truck
to making a computer 'learn' from data.
DSA is fundamental in nearly every part of the software world:
 Operating Systems
 Database Systems
 Web Applications
 Machine Learning
 Video Games
 Cryptographic Systems
 Data Analysis
 Search Engines
