
MODULE – 1

INTRODUCTION

BMS Institute of Technology and Mgmt


Agenda
✔ What is an Algorithm?
✔ Algorithm Specification
✔ Analysis Framework
✔ Performance Analysis: Space complexity, Time complexity
✔ Asymptotic Notations: Big-Oh notation (O), Omega notation (Ω), Theta notation (Θ), and Little-oh notation (o)
✔ Mathematical analysis of Non-Recursive and Recursive Algorithms with Examples
✔ Important Problem Types: Sorting, Searching, String processing, Graph Problems, Combinatorial Problems
✔ Fundamental Data Structures: Stacks, Queues, Graphs, Trees, Sets and Dictionaries

Learning Outcomes of Module 1
Students will be able to
✔ Represent real-world problems in algorithmic notation.
✔ Analyze the performance of an algorithm.
✔ Recognize important problem types.
✔ Work with fundamental data structures.

What is an algorithm?
Algorithmics: The Spirit of Computing – David Harel.

Another reason for studying algorithms is their usefulness in developing analytical skills.

Algorithms can be seen as special kinds of solutions to problems – not answers but rather precisely defined procedures for getting answers.

What is an algorithm?
Recipe, process, method, technique, procedure, routine, … with the following requirements:
1. Finiteness
   – terminates after a finite number of steps
2. Definiteness
   – rigorously and unambiguously specified
3. Clearly specified input
   – valid inputs are clearly specified
4. Clearly specified/expected output
   – can be proved to produce the correct output given a valid input
5. Effectiveness
   – steps are sufficiently simple and basic
Algorithm

• Can be represented in various forms


• Unambiguity/clearness
• Effectiveness
• Finiteness/termination
• Correctness

What is an algorithm?
An algorithm is a sequence of unambiguous instructions
for solving a problem, i.e., for obtaining a required
output for any legitimate input in a finite amount of
time.
Problem
   ↓
Algorithm
   ↓
Input → "Computer" → Output


Why study algorithms?
• Theoretical importance
  – the core of computer science
• Practical importance
  – A practitioner’s toolkit of known algorithms
  – Framework for designing and analyzing algorithms for new problems

Euclid’s Algorithm
Problem: Find gcd(m,n), the greatest common divisor of two nonnegative integers m and n, not both zero.

Examples: gcd(60,24) = 12, gcd(60,0) = 60, gcd(0,0) = ?

Euclid’s algorithm is based on repeated application of the equality
gcd(m,n) = gcd(n, m mod n)
until the second number becomes 0, which makes the problem trivial.

Example: gcd(60,24) = gcd(24,12) = gcd(12,0) = 12



Two descriptions of Euclid’s algorithm
Step 1 If n = 0, return m and stop; otherwise go to Step 2
Step 2 Divide m by n and assign the value of the remainder to r
Step 3 Assign the value of n to m and the value of r to n. Go to Step 1.

while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m
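
The pseudocode above maps directly to Python. The following is a minimal sketch of that loop (the function name is chosen for illustration):

def gcd_euclid(m, n):
    # Euclid's algorithm: gcd of nonnegative integers m and n, not both zero.
    while n != 0:
        r = m % n   # r <- m mod n
        m = n       # m <- n
        n = r       # n <- r
    return m

print(gcd_euclid(60, 24))  # 12
print(gcd_euclid(60, 0))   # 60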



Other methods for computing gcd(m,n)
Consecutive integer checking algorithm
Step 1 Assign the value of min{m,n} to t
Step 2 Divide m by t. If the remainder is 0, go to Step 3; otherwise, go to Step 4
Step 3 Divide n by t. If the remainder is 0, return t and stop; otherwise, go to Step 4
Step 4 Decrease t by 1 and go to Step 2

Is this slower than Euclid’s algorithm? How much slower?
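
To help compare the two methods, here is a direct Python rendering of the consecutive integer checking algorithm (a sketch; it assumes m and n are positive, since the method breaks down when either input is 0):

def gcd_consecutive(m, n):
    # Consecutive integer checking; assumes m > 0 and n > 0.
    t = min(m, n)              # Step 1
    while True:
        if m % t == 0:         # Step 2
            if n % t == 0:     # Step 3
                return t
        t -= 1                 # Step 4

print(gcd_consecutive(60, 24))  # 12

In the worst case t is decreased all the way down to 1, so the number of iterations can be proportional to min(m, n), whereas Euclid’s algorithm needs far fewer steps.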

Other methods for gcd(m,n) [cont.]
Middle-school procedure
Step 1 Find the prime factorization of m
Step 2 Find the prime factorization of n
Step 3 Find all the common prime factors
Step 4 Compute the product of all the common prime factors and return it as gcd(m,n)

Is this an algorithm?

How efficient is it?
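
A possible Python sketch of the middle-school procedure is given below. The trial-division factorization helper is an assumption added for illustration; the procedure itself does not specify how Steps 1–3 are to be carried out.

from collections import Counter

def prime_factors(x):
    # Assumed helper: trial-division prime factorization of x >= 1.
    factors = Counter()
    d = 2
    while d * d <= x:
        while x % d == 0:
            factors[d] += 1
            x //= d
        d += 1
    if x > 1:
        factors[x] += 1
    return factors

def gcd_middle_school(m, n):
    fm, fn = prime_factors(m), prime_factors(n)    # Steps 1 and 2
    gcd = 1
    for p in fm.keys() & fn.keys():                # Step 3: common prime factors
        gcd *= p ** min(fm[p], fn[p])              # Step 4: their product
    return gcd

print(gcd_middle_school(60, 24))  # 2 * 2 * 3 = 12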


Fundamental steps in solving problems

✔ Statement of the problem
✔ Development of a mathematical model
✔ Design of the algorithm
✔ Correctness of the algorithm
✔ Analysis of the algorithm for its time and space complexity
✔ Implementation
✔ Program testing and debugging
✔ Documentation
The two common methods to find the running time of a program are:
1. Operation Count
2. Steps Count

Operation Count

The operation count method is a technique used to analyze the time complexity of
algorithms by counting the number of basic operations performed as a function of
the input size.
1. Identify the Basic Operations: Begin by identifying the basic operations that the algorithm performs. These operations can be simple arithmetic operations (e.g., addition, subtraction, multiplication), comparisons (e.g., less than, equal to), assignments, or any other fundamental operations that are executed repeatedly.
2. Count the Operations: For each basic operation, determine how many times it is executed based on the input size (n). To do this, you may need to examine the algorithm's loops, recursive calls, and conditional statements. Keep in mind that the number of operations might vary depending on the specific input data and any early termination conditions.
Example

def array_sum(arr):
    # One assignment to initialize the running total.
    sum = 0
    # One addition per element: n additions for an array of size n.
    for element in arr:
        sum += element
    return sum
Now, let's go through the steps of the operation count method to find the time complexity:

Step 1: Identify the Basic Operations


- Assigning a value to the variable "sum" (sum = 0).
- Addition operation inside the loop (sum += element).

Step 2: Count the Operations


- The assignment operation is performed only once.
- The addition operation inside the loop is executed "n" times, where "n" is the size of the
input array.

Step 3: Express the Operation Count


The total number of basic operations can be expressed as a function of the input size "n":

T(n) = 1 (assignment) + n (addition inside the loop)



Step 4: Simplify the Expression


Since we are interested in the time complexity, we can ignore the constant term (1) and
only consider the dominant term (n) because it represents the most significant factor as "n"
approaches infinity.

T(n) ≈ n
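
To check this count empirically, the function can be instrumented with an explicit counter. This is a small sketch added for illustration; the counter and variable names are not part of the original example.

def array_sum_counted(arr):
    count = 0
    total = 0              # the single assignment counted as the "1" in T(n)
    count += 1
    for element in arr:
        total += element   # the addition executed n times
        count += 1
    return total, count

for n in (1, 10, 100):
    _, ops = array_sum_counted(list(range(n)))
    print(n, ops)   # prints n and 1 + n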
STEPS COUNT

Here we attempt to find the time spent in all parts of the program.
Asymptotic Analysis of algorithms (Growth of functions)
The resources required by an algorithm are usually expressed as a function of the input size. Often this function is messy and complicated to work with. To study the growth of the function efficiently, we reduce it to its most important part.
Let f(n) = an² + bn + c
In this function, the n² term dominates once n gets sufficiently large. The dominant term is what we keep when reducing a function: we ignore all constants and coefficients and look only at the highest-order term in n.
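
A quick numeric check makes this concrete. With illustrative coefficients a = 1, b = 100, c = 1000 (assumed values, chosen only for this example), the n² term supplies almost all of f(n) once n is large:

a, b, c = 1, 100, 1000   # assumed coefficients for illustration

for n in (10, 100, 1000, 10000):
    f = a * n**2 + b * n + c
    # fraction of f(n) contributed by the n^2 term alone
    print(n, f, round(a * n**2 / f, 3))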
Asymptotic notation:
The word Asymptotic means approaching a value or curve arbitrarily closely (i.e., as some
sort of limit is taken).
Asymptotic analysis is the study of functions of a parameter n as n becomes larger and larger without bound.
Here we are concerned with how the running time of an algorithm increases with the size of the input.

Asymptotic notations are used to express the fastest and slowest possible running times of an algorithm. These are also referred to as the 'best case' and 'worst case' scenarios respectively.
"In asymptotic notation, we express the complexity in terms of the size of the input (for example, in terms of n)."
"These notations are important because they let us estimate the complexity of an algorithm without computing the exact cost of running it."
Asymptotic Notations:
Asymptotic notation is a way of comparing functions that ignores constant factors and small input sizes. Three notations are used to express the running-time complexity of an algorithm:

1. Big-oh notation: Big-oh is the formal method of expressing the upper bound of an algorithm's running time; it measures the longest amount of time the algorithm can take. The function f(n) = O(g(n)) [read as "f of n is big-oh of g of n"] if and only if there exist positive constants c and n0 such that
f(n) ≤ c·g(n) for all n ≥ n0.
Hence, c·g(n) is an upper bound for f(n): for n ≥ n0, f(n) grows no faster than g(n) up to the constant factor c.
For Example:
1. 3n+2 = O(n) since 3n+2 ≤ 4n for all n ≥ 2
2. 3n+3 = O(n) since 3n+3 ≤ 4n for all n ≥ 3
Hence, the complexity of f(n) can be represented as O(g(n)).
2. Omega (Ω) Notation: The function f(n) = Ω(g(n)) [read as "f of n is omega of g of n"] if and only if there exist positive constants c and n0 such that f(n) ≥ c·g(n) for all n ≥ n0.
For Example:
f(n) = 8n² + 2n - 3 ≥ 8n² - 3 = 7n² + (n² - 3) ≥ 7n² for all n ≥ 2. Thus c = 7, n0 = 2, and g(n) = n².
Hence, the complexity of f(n) can be represented as Ω(g(n)).
3. Theta (θ): The function f(n) = θ(g(n)) [read as "f of n is theta of g of n"] if and only if there exist positive constants c1, c2 and n0 such that
c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.
For Example:
3n+2 = θ(n) since 3n+2 ≥ 3n and 3n+2 ≤ 4n for all n ≥ 2, with c1 = 3, c2 = 4, and n0 = 2.
Hence, the complexity of f(n) can be represented as θ(g(n)).
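
The sample inequalities used in the three definitions above can be verified numerically. This is only a sanity check over a finite range, not a proof:

# Check the example inequalities for 2 <= n < 1000
for n in range(2, 1000):
    assert 3 * n + 2 <= 4 * n                    # 3n+2 = O(n), n0 = 2
    assert n < 3 or 3 * n + 3 <= 4 * n           # 3n+3 = O(n), n0 = 3
    assert 3 * n + 2 >= 3 * n                    # lower bound used for theta(n)
    assert 8 * n**2 + 2 * n - 3 >= 7 * n**2      # 8n^2+2n-3 = Omega(n^2), n0 = 2
print("All sample inequalities hold for 2 <= n < 1000")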
Computing time functions

Values of some important functions as n → ∞

Order of growth
• Most important: order of growth within a constant multiple as n → ∞

• Example:
  – How much faster will the algorithm run on a computer that is twice as fast?
  – How much longer does it take to solve a problem of double the input size?

Best-case, average-case, worst-case
For some algorithms, efficiency depends on the form of the input:

• Worst case: Cworst(n) – maximum over inputs of size n
• Best case: Cbest(n) – minimum over inputs of size n
• Average case: Cavg(n) – "average" over inputs of size n
  – the number of times the basic operation will be executed on typical input
  – NOT the average of the worst and best cases
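
Sequential search is a standard illustration of these three cases (this example is not taken from the slides above; it is a sketch added for clarity, with the key comparison as the basic operation):

def sequential_search(arr, key):
    # Return (index, number of key comparisons); index is -1 if key is absent.
    comparisons = 0
    for i, element in enumerate(arr):
        comparisons += 1
        if element == key:
            return i, comparisons
    return -1, comparisons

data = [5, 3, 8, 1, 9]              # n = 5
print(sequential_search(data, 5))   # best case:  key is first   -> 1 comparison
print(sequential_search(data, 9))   # worst case: key is last    -> n comparisons
print(sequential_search(data, 7))   # worst case: key is absent  -> n comparisons
# Average case (successful search, each position equally likely): about n/2 comparisons.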



Recurrence Relation
A recurrence is an equation or inequality that describes a function in terms of its values on smaller inputs. To solve a recurrence relation means to obtain a function defined on the natural numbers that satisfies the recurrence.
For example, the worst-case running time T(n) of the MERGE SORT procedure is described by the recurrence
T(n) = θ(1) if n = 1
T(n) = 2T(n/2) + θ(n) if n > 1
There are four methods for solving recurrences:
1. Substitution Method
2. Iteration Method
3. Recursion Tree Method
4. Master Method
Iteration Method
It means to expand the recurrence and express it as a summation of terms of n and the initial condition.
Example 1: Consider the recurrence
T(n) = 1 if n = 1
T(n) = 2T(n-1) if n > 1
Example 2: Consider the recurrence
T(n) = T(n-1) + 1 with T(1) = θ(1).
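As an illustration, expanding these recurrences by iteration gives:
Example 1: T(n) = 2T(n-1) = 2²T(n-2) = 2³T(n-3) = ... = 2^(n-1)T(1) = 2^(n-1), so T(n) = θ(2^n).
Example 2: T(n) = T(n-1) + 1 = T(n-2) + 2 = ... = T(1) + (n-1), so T(n) = θ(n).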
