
UNIT 1

Introduction
Outline

 Notion of algorithm - different approaches/algorithms for the
same problem.
 Algorithmic problem solving - discuss several important issues
related to the design and analysis of algorithms.
 Study a few problem types that have proven to be particularly
important to the study of algorithms and their application.
 A review of fundamental data structures.

1-1
Facts

• The study of algorithms is sometimes called algorithmics.
• Algorithms can be seen as special kinds of solutions to
problems: precisely defined procedures for getting answers.
• Donald Knuth – “A person well-trained in computer science
knows how to deal with algorithms: how to construct them,
manipulate them, understand them, analyze them.”
What is an algorithm?
An algorithm is a sequence of unambiguous
instructions for solving a problem, i.e., for obtaining
a required output for any legitimate input in a finite
amount of time.

problem
   ↓
algorithm
   ↓
input → “computer” → output

Algorithms + Data Structures = Programs                          1-3


Characteristics of an Algorithm

• Can be represented in various forms


• Unambiguity/clearness
• Effectiveness
• Finiteness/termination
• Correctness

1-4
Points to remember

 The nonambiguity requirement for each step of an


algorithm cannot be compromised.
 The range of inputs for which an algorithm works has to be
specified carefully.
 The same algorithm can be represented in several different
ways.
 There may exist several algorithms for solving the same
problem.
 Algorithms for the same problem can be based on very
different ideas and can solve the problem with dramatically
different speeds.

1-5
What is an algorithm?
 Recipe, process, method, technique, procedure, routine,…
with the following requirements:
1. Finiteness
 terminates after a finite number of steps
2. Definiteness
 rigorously and unambiguously specified
3. Clearly specified input
 valid inputs are clearly specified
4. Clearly specified/expected output
 can be proved to produce the correct output given a valid input
5. Effectiveness
 steps are sufficiently simple and basic

1-6
Why study algorithms?

 Theoretical importance

• the core of computer science

 Practical importance

• A practitioner’s toolkit of known algorithms

• Framework for designing and analyzing algorithms for


new problems
Example: Google’s PageRank Technology

1-7
Basic Issues Related to Algorithms
 How to design algorithms

 How to express algorithms

 Proving correctness

 Efficiency (or complexity) analysis


• Theoretical analysis
• Empirical analysis

 Optimality

1-8
Analysis of Algorithms

 How good is the algorithm?


• Correctness
• Time efficiency
• Space efficiency

 Does there exist a better algorithm?


• Lower bounds
• Optimality

1-9
Euclid’s Algorithm

Problem: Find gcd(m,n), the greatest common divisor of two

nonnegative integers m and n, not both zero

Examples: gcd(60,24) = 12, gcd(60,0) = 60, gcd(0,0) = undefined

Euclid’s algorithm is based on repeated application of equality


gcd(m,n) = gcd(n, m mod n)
until the second number becomes 0, which makes the problem
trivial.

Example: gcd(60,24) = gcd(24,12) = gcd(12,0) = 12

1-10
Two descriptions of Euclid’s algorithm

Step 1 If n = 0, return m and stop; otherwise go to Step 2


Step 2 Divide m by n and assign the value of the remainder to r
Step 3 Assign the value of n to m and the value of r to n. Go to
Step 1.

while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m

1-11
Other methods for computing gcd(m,n)

Consecutive integer checking algorithm


Step 1 Assign the value of min{m,n} to t
Step 2 Divide m by t. If the remainder is 0, go to Step 3;
otherwise, go to Step 4
Step 3 Divide n by t. If the remainder is 0, return t and stop;
otherwise, go to Step 4
Step 4 Decrease t by 1 and go to Step 2

Is this slower than Euclid’s algorithm? How much slower?

It makes up to min{m,n} iterations: O(n) if n <= m, versus
O(log n) for Euclid’s algorithm.


1-12
Other methods for gcd(m,n) [cont.]

Middle-school procedure
Step 1 Find the prime factorization of m
Step 2 Find the prime factorization of n
Step 3 Find all the common prime factors
Step 4 Compute the product of all the common prime factors
and return it as gcd(m,n)

Is this an algorithm? Not as stated: “find the prime
factorization” is itself not specified unambiguously and needs its
own procedure.

How efficient is it?

Time complexity: O(sqrt(n)) divisions to factor each number by
trial division.
1-13
Sieve of Eratosthenes – generating the consecutive
primes not exceeding any given integer n > 1.

Input: Integer n ≥ 2
Output: List of primes less than or equal to n
for p ← 2 to n do A[p] ← p
for p ← 2 to n do
    if A[p] ≠ 0 //p hasn’t been previously eliminated from the list
        j ← p * p
        while j ≤ n do
            A[j] ← 0 //mark element as eliminated
            j ← j + p
copy the remaining (nonzero) elements of A into the output list
Example: 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
Time complexity: O(n log log n)
1-14
Example: n = 25 yields the primes 2, 3, 5, 7, 11, 13, 17, 19, 23
Two main issues related to algorithms

 How to design algorithms

 How to analyze algorithm efficiency

1-16
Fundamentals of Algorithmic Problem Solving

1-17
Analysis of algorithms

 How good is the algorithm?


• time efficiency
• space efficiency
• correctness ignored in this course

 Does there exist a better algorithm?


• lower bounds
• optimality

1-18
Important problem types
 sorting

 searching

 string processing

 graph problems

 combinatorial problems

 geometric problems

 numerical problems

1-19
Sorting (I)
 Rearrange the items of a given list in ascending order.
• Input: A sequence of n numbers <a1, a2, …, an>
• Output: A reordering <a´1, a´2, …, a´n> of the input
sequence such that a´1≤ a´2 ≤ … ≤ a´n.

 Why sorting?
• Help searching
• Algorithms often use sorting as a key subroutine.

 Sorting key
• A specially chosen piece of information used to guide
sorting. E.g., sort student records by names.

1-20
Sorting (II)
 Examples of sorting algorithms
• Selection sort
• Bubble sort
• Insertion sort
• Merge sort
• Heap sort …

 Evaluate sorting algorithm complexity: the number of key comparisons.

 Two properties
• Stability: A sorting algorithm is called stable if it preserves the
relative order of any two equal elements in its input.
• In place : A sorting algorithm is in place if it does not require extra
memory, except, possibly for a few memory units.

1-21
Selection Sort
Algorithm SelectionSort(A[0..n-1])
//The algorithm sorts a given array by selection sort
//Input: An array A[0..n-1] of orderable elements
//Output: Array A[0..n-1] sorted in ascending order
for i ← 0 to n - 2 do
    min ← i
    for j ← i + 1 to n - 1 do
        if A[j] < A[min]
            min ← j
    swap A[i] and A[min]

1-22
Searching
 Find a given value, called a search key, in a given set.
 Examples of searching algorithms
• Sequential search
• Binary search …
Input: sorted array a_i < … < a_j and key x;
while i ≤ j do
    m ← ⌊(i+j)/2⌋;
    if x = a_m then output a_m and stop;
    else if x < a_m then j ← m-1
    else i ← m+1;

Time: O(log n)

1-23
String Processing

 A string is a sequence of characters from an alphabet.


 Text strings: letters, numbers, and special characters.
 String matching: searching for a given word/pattern in a
text.

Examples:
(i) searching for a word or phrase on WWW or in a
Word document
(ii) searching for a short read in the reference genomic
sequence

1-24
Graph Problems
 Informal definition
• A graph is a collection of points called vertices, some of
which are connected by line segments called edges.
 Modeling real-life problems
• Modeling WWW
• Communication networks
• Project scheduling …
 Examples of graph algorithms
• Graph traversal algorithms
• Shortest-path algorithms
• Topological sorting

1-25
Combinatorial Problems
 Problems that ask, explicitly or implicitly, to find a
combinatorial object—such as a permutation, a
combination, or a subset—that satisfies certain constraints
with some additional property such as a maximum value or
a minimum cost.
 Combinatorial problems are the most difficult problems in
computing, from both a theoretical and practical
standpoint because:
• the number of combinatorial objects typically grows extremely fast
with a problem’s size.
• there are no known algorithms for solving most such problems
exactly in an acceptable amount of time.

Example: TSP, Graph Coloring


1-26
Geometric Problems
• Geometric algorithms deal with geometric objects such as
points, lines, and polygons.

Example: Closest-pair problem and the convex-hull problem.

Numerical Problems
• Problems that involve mathematical objects of continuous
nature: solving equations and systems of equations,
computing definite integrals, evaluating functions, and so on.
• The majority of such mathematical problems can be solved
only approximately.
Fundamental data structures

 list
   • array
   • linked list
   • string
 stack
 queue
 priority queue/heap
 graph
 tree and binary tree
 set and dictionary

1-28
Linear Data Structures
 Arrays
• A sequence of n items of the same data type that are stored
contiguously in computer memory and made accessible by specifying a
value of the array’s index.
• fixed length (needs preliminary reservation of memory)
• contiguous memory locations
• direct access
• insert/delete require shifting elements
 Linked Lists
• A sequence of zero or more nodes, each containing two kinds of
information: some data and one or more links called pointers to
other nodes of the linked list.
• Singly linked list (next pointer)
• Doubly linked list (next + previous pointers)
• dynamic length
• arbitrary memory locations
• access by following links
• insert/delete by relinking pointers

[Figure: singly linked list a1 → a2 → … → an → null]
1-29
Stacks and Queues

 Stacks
• A stack of plates
– insertion/deletion can be done only at the top.
– LIFO
• Two operations (push and pop)
 Queues
• A queue of customers waiting for services
– Insertion/enqueue from the rear and
deletion/dequeue from the front.
– FIFO
• Two operations (enqueue and dequeue)
Priority Queue and Heap
 Priority queues (implemented using heaps)
 A data structure for maintaining a set of elements,
each associated with a key/priority, with the
following operations:
    Finding the element with the highest priority
    Deleting the element with the highest priority
    Inserting a new element
 Application: scheduling jobs on a shared computer

[Figure: heap with root 9, children 6 and 8, leaves 5, 2, 3;
array representation: 9 6 8 5 2 3]
Graphs
 Formal definition
• A graph G = <V, E> is defined by a pair of two sets: a
finite set V of items called vertices and a set E of vertex
pairs called edges.
 Undirected and directed graphs (digraphs).
 What’s the maximum number of edges in an undirected
graph with |V| vertices? Answer: |V|(|V|-1)/2.
 Complete, dense, and sparse graphs
• A graph with every pair of its vertices connected by an
edge is called complete

1 2

3 4
1-32
Graph Representation
 Adjacency matrix
• n x n boolean matrix if |V| is n.
• The element on the ith row and jth column is 1 if there’s an
edge from ith vertex to the jth vertex; otherwise 0.
• The adjacency matrix of an undirected graph is symmetric.
 Adjacency linked lists
• A collection of linked lists, one for each vertex, that contain all
the vertices adjacent to the list’s vertex.
 Which data structure would you use if the graph is a 100-node star
shape?

Adjacency matrix   Adjacency lists
  0 1 1 1          1 → 2, 3, 4
  0 0 0 1          2 → 4
  0 0 0 1          3 → 4
  0 0 0 0          4 →

1-33
Weighted Graphs
 Weighted graphs
• Graphs or digraphs with numbers assigned to the edges.

[Figure: weighted graph on vertices 1, 2, 3, 4 with edge
weights 5, 6, 7, 8, 9]

1-34
Graph Properties -- Paths and Connectivity
 Paths
• A path from vertex u to v of a graph G is defined as a sequence of
adjacent (connected by an edge) vertices that starts with u and ends
with v.
• Simple paths: All edges of a path are distinct.
• Path lengths: the number of edges, or the number of vertices – 1.
 Connected graphs
• A graph is said to be connected if for every pair of its vertices u and
v there is a path from u to v.
 Connected component
• The maximum connected subgraph of a given graph.

1-35
Graph Properties -- Acyclicity
 Cycle
• A simple path of a positive length that starts and
ends at the same vertex.
 Acyclic graph
• A graph without cycles
• DAG (Directed Acyclic Graph)

1 2

3 4

1-36
Trees
 Trees
• A tree (or free tree) is a connected acyclic graph.
• Forest: a graph that has no cycles but is not necessarily
connected.
 Properties of trees
• For every two vertices in a tree there always exists exactly one
simple path from one of these vertices to the other. Why?
– Rooted trees: The above property makes it possible to
select an arbitrary vertex in a free tree and consider it as
the root of the so called rooted tree.
– Levels in a rooted tree.

[Figure: a free tree on vertices 1-5 and the rooted tree obtained
by choosing vertex 3 as the root]

 |E| = |V| - 1
1-37
Rooted Trees (I)
 Ancestors
• For any vertex v in a tree T, all the vertices on the simple path
from the root to that vertex are called ancestors.
 Descendants
• All the vertices for which a vertex v is an ancestor are said to be
descendants of v.
 Parent, child and siblings
• If (u, v) is the last edge of the simple path from the root to
vertex v, u is said to be the parent of v and v is called a child of
u.
• Vertices that have the same parent are called siblings.
 Leaves
• A vertex without children is called a leaf.
 Subtree
• A vertex v with all its descendants is called the subtree of T
rooted at v.
1-38
Rooted Trees (II)
 Depth of a vertex
• The length of the simple path from the root to the vertex.
 Height of a tree
• The length of the longest simple path from the root to a leaf.

[Figure: rooted tree with root 3, children 4, 1, 5, and a child 2
under vertex 4; height h = 2]

1-39
Ordered Trees
 Ordered trees
• An ordered tree is a rooted tree in which all the children of each
vertex are ordered.
 Binary trees
• A binary tree is an ordered tree in which every vertex has no more
than two children and each child is designated as either a left child
or a right child of its parent.
 Binary search trees
• Each vertex is assigned a number.
• A number assigned to each parental vertex is larger than all the
numbers in its left subtree and smaller than all the numbers in its
right subtree.
 ⌊log2 n⌋ ≤ h ≤ n - 1, where h is the height of a binary tree and n the size.

[Figure: two binary trees on the keys 9, 6, 8, 5, 2, 3 - an
arbitrary one, and a binary search tree with root 6, children 3
and 9, and leaves 2, 5, 8]
1-40
Some Well-known Computational Problems

 Sorting
 Searching
 Shortest paths in a graph
 Minimum spanning tree
 Primality testing
 Traveling salesman problem
 Knapsack problem
 Chess
 Towers of Hanoi
 Program termination

Some of these problems don’t have efficient algorithms,


or algorithms at all!
1-41
Algorithm design techniques/strategies

 Brute force
 Divide and conquer
 Decrease and conquer
 Transform and conquer
 Space and time tradeoffs
 Greedy approach
 Dynamic programming
 Iterative improvement
 Backtracking
 Branch and bound

1-42
Fundamentals of the Analysis of
Algorithm Efficiency

1-43
Analysis of algorithms

 Issues:
• correctness
• time efficiency
• space efficiency
• optimality

 Approaches:
• theoretical analysis
• empirical analysis
Space Complexity
S(P) = C + S_P(I)
 Fixed Space Requirements (C)
Independent of the characteristics of the inputs
and outputs
• instruction space
• space for simple variables, fixed-size structured
variables, constants
 Variable Space Requirements (S_P(I))
depend on the instance characteristic I
• number, size, values of inputs and outputs associated
with I
• recursive stack space, formal parameters, local
variables, return address
45
Measuring an Input’s Size

 As algorithms run longer on larger inputs, it is logical
to investigate an algorithm’s efficiency as a function
of some parameter indicating the algorithm’s input
size.
 In some cases (e.g., checking the primality of an integer n)
the input is just one number, and it is this number’s
magnitude that determines the input size.

1-46
Units for Measuring Running Time
Units for measuring an algorithm’s running time using some
standard unit of time measurement—a second, or millisecond has
drawbacks due to the following:
 dependence on the speed of a particular computer
 dependence on the quality of a program implementing the
algorithm
 the compiler used in generating the machine code
 the difficulty of clocking the actual running time of the
program.

Hence, we need a metric that does not depend on these


extraneous factors.

1-47
Time Complexity
The metrics used are:
Step count: considers the time required by each and every
instruction in an algorithm.
• Determine the total number of steps contributed by each
statement: steps per execution × frequency
• add up the contributions of all statements.

Operation count: identify the most important operation of the


algorithm, called the basic operation, the operation contributing
the most to the total running time, and compute the number of
times the basic operation is executed.

1-48
Iterative function to sum a list of numbers
steps/execution
Statement s/e Frequency Total steps
float sum(float list[ ], int n) 0 0 0
{ 0 0 0
float tempsum = 0; 1 1 1
int i; 0 0 0
for(i=0; i <n; i++) 1 n+1 n+1
tempsum += list[i]; 1 n n
return tempsum; 1 1 1
} 0 0 0
Total 2n+3
Recursive Function to sum of a list of numbers

Statement s/e Frequency Total steps


float rsum(float list[ ], int n) 0 0 0
{ 0 0 0
if (n) 1 n+1 n+1
return rsum(list, n-1)+list[n-1]; 1 n n
return list[0]; 1 1 1
} 0 0 0
Total 2n+2
Theoretical analysis of time efficiency
Time efficiency is analyzed by determining the number of
repetitions of the basic operation as a function of input size

 Basic operation: the operation that contributes most


towards the running time of the algorithm
T(n) ≈ c_op · C(n)

where T(n) is the running time as a function of input size n,
c_op is the execution time of the basic operation, and C(n) is the
number of times the basic operation is executed.
Input size and basic operation examples

Problem                    Input size measure             Basic operation
Searching for key in a     Number of list’s items,        Key comparison
list of n items            i.e. n
Multiplication of two      Matrix dimensions or total     Multiplication of
matrices                   number of elements             two numbers
Checking primality of      n’s size = number of digits    Division
a given integer n          (in binary representation)
Typical graph problem      #vertices and/or edges         Visiting a vertex or
                                                          traversing an edge
Physical time calculation in C++
 The clock() function in C++ (<ctime>) returns the approximate
processor time that is consumed by the program.
 In order to compute the processor time, the difference between
values returned by two different calls to clock(), one at the
start and other at the end of the program is used. To convert
the value to seconds, it needs to be divided by a macro
CLOCKS_PER_SEC.
 The clock() time may advance faster or slower than the actual
wall clock. It depends on how the operating system allocates
the resources for the process.
 If the processor is shared by other processes, the clock() time
may advance slower than the wall clock. While if the current
process is executed in a multithreaded system, the clock() time
may advance faster than wall clock.
<ctime> Functions

 C++ strftime()
 C++ mktime()
 C++ localtime()
 C++ gmtime()
 C++ ctime()
 C++ asctime()
 C++ time()
 C++ difftime()
 C++ clock()

1-54
clock() prototype
• clock_t clock();

It is defined in <ctime> header file.

clock() Parameters None


clock() Return value
• On success, the clock() function returns the
processor time used by the program till now.
• On failure, it returns -1, cast to the type
clock_t.
Example: How clock() function works

#include <iostream>
#include <ctime>
#include <cmath>
using namespace std;

int main()
{
    float y;
    clock_t time_req;

    // Using pow function
    time_req = clock();
    for (int i = 0; i < 100000; i++) {
        y = log(pow(i, 5));
    }
    time_req = clock() - time_req;
    cout << "Using pow function, it took "
         << (float)time_req / CLOCKS_PER_SEC << " seconds" << endl;

    // Without pow function (cast to double to avoid int overflow in i^5)
    time_req = clock();
    for (int i = 0; i < 100000; i++) {
        y = log((double)i * i * i * i * i);
    }
    time_req = clock() - time_req;
    cout << "Without using pow function, it took "
         << (float)time_req / CLOCKS_PER_SEC << " seconds" << endl;
}
1-56
Asymptotic Complexity
 Running time of an algorithm as a function of input size n for
large n.
 Written using asymptotic notation: O, Ω, Θ
For some algorithms efficiency depends on form of input:

 Worst case: Cworst(n) – maximum over inputs of size n


 Best case: Cbest(n) – minimum over inputs of size n
 Average case: Cavg(n) – “average” over inputs of size n
• Number of times the basic operation will be executed on typical input
• NOT the average of worst and best case
• Expected number of basic operations considered as a random
variable under some assumption about the probability distribution of
all possible inputs
O-notation
For a function g(n), we define O(g(n)),
big-O of g(n), as the set:
O(g(n)) = { f(n) : there exist positive constants c and n0
such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }
Intuitively: the set of all functions
whose rate of growth is the same as
or lower than that of g(n).
g(n) is an asymptotic upper bound for f(n).
f(n) = Θ(g(n)) ⇒ f(n) = O(g(n)).
Θ(g(n)) ⊆ O(g(n)).
Big-oh
Examples:
 2n + 10 is O(n)
• need 2n + 10 ≤ c·n
• (c - 2)·n ≥ 10
• n ≥ 10/(c - 2)
• pick c = 3 and n0 = 10

 7n - 2 is O(n)
need c > 0 and n0 ≥ 1 such that 7n - 2 ≤ c·n for n ≥ n0
this is true for c = 7 and n0 = 1

 3n^3 + 20n^2 + 5 is O(n^3)
need c > 0 and n0 ≥ 1 such that 3n^3 + 20n^2 + 5 ≤ c·n^3 for n ≥ n0
this is true for c = 4 and n0 = 21

 3 log n + 5 is O(log n)
need c > 0 and n0 ≥ 1 such that 3 log n + 5 ≤ c·log n for n ≥ n0
this is true for c = 8 and n0 = 2
Ω-notation
For a function g(n), we define Ω(g(n)),
big-Omega of g(n), as the set:
Ω(g(n)) = { f(n) : there exist positive constants c and n0
such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }
Intuitively: the set of all functions
whose rate of growth is the same
as or higher than that of g(n).
g(n) is an asymptotic lower bound for f(n).
f(n) = Θ(g(n)) ⇒ f(n) = Ω(g(n)).
Θ(g(n)) ⊆ Ω(g(n)).
Big-omega
Example: 5n^2 is Ω(n), since 5n^2 ≥ 1·n for all n ≥ 1 (c = 1, n0 = 1).
Θ-notation
For a function g(n), we define Θ(g(n)),
big-Theta of g(n), as the set:
Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0
such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }
Intuitively: the set of all functions that
have the same rate of growth as g(n).

g(n) is an asymptotically tight bound for f(n).


Big-theta
Example: n^2/2 - 3n is Θ(n^2): take c1 = 1/14, c2 = 1/2, n0 = 7;
then 0 ≤ (1/14)n^2 ≤ n^2/2 - 3n ≤ (1/2)n^2 for all n ≥ 7.
Relations Between Θ, O, Ω
Theorem: For any two functions g(n) and f(n),
f(n) = Θ(g(n)) iff
f(n) = O(g(n)) and f(n) = Ω(g(n)).

 I.e., Θ(g(n)) = O(g(n)) ∩ Ω(g(n))

 In practice, asymptotically tight bounds are
obtained from asymptotic upper and lower
bounds.
1-69
For an algorithm that comprises two consecutively executed
parts, this property implies that the algorithm’s overall
efficiency is determined by the part with the higher order of
growth, i.e., its least efficient part.

Eg: If a program has two modules, one for sorting and the
other for searching, with time complexities O(n2) and
O(n) respectively, then the overall complexity of the
program is O(max{n2, n}) = O(n2).
1-70
Basic asymptotic efficiency classes
1 constant

log n logarithmic

n linear

n log n n-log-n

n2 quadratic

n3 cubic

2n exponential

n! factorial
Order of Growth of Time Complexity

• Time Complexity/Order
of Growth defines the
amount of time taken
by any program with
respect to the size of
the input.

• Time Complexity is just


a function of size of its
input.

1-72
Values of some important functions as n → ∞
Some properties of asymptotic order of growth

 f(n) ∈ O(f(n))

 f(n) ∈ O(g(n)) iff g(n) ∈ Ω(f(n))

 If f(n) ∈ O(g(n)) and g(n) ∈ O(h(n)), then f(n) ∈ O(h(n))

Note similarity with a ≤ b

 If f1(n) ∈ O(g1(n)) and f2(n) ∈ O(g2(n)), then
f1(n) + f2(n) ∈ O(max{g1(n), g2(n)})
Useful summation formulas and rules
Examples of Summation

1-76
Time efficiency of nonrecursive algorithms

General Plan for Analysis

 Decide on parameter n indicating input size

 Identify algorithm’s basic operation

 Determine worst, average, and best cases for input of size n

 Set up a sum for the number of times the basic operation is


executed

 Simplify the sum using standard formulas and rules


Example: Sequential search

 Worst case O(n)

 Best case O(1)

 Average case O(n)


Example 1: Maximum element
Example 2: Element uniqueness problem

Best-case situation:
If the first two elements of the array are the same, then we can exit
after one comparison. Best case = 1 comparison.
Time Complexity
Worst-case situation:
• The basic operation is the comparison in the inner loop. The
worst case happens for two-kinds of inputs:
– Arrays with no equal elements
– Arrays in which only the last two elements are the pair of
equal elements

1-81
Example 3: Matrix multiplication
Example 4: Counting binary digits

It cannot be investigated the way the previous examples are.


Plan for Analysis of Recursive Algorithms

 Decide on a parameter indicating an input’s size.

 Identify the algorithm’s basic operation.

 Check whether the number of times the basic op. is executed


may vary on different inputs of the same size. (If it may, the
worst, average, and best cases must be investigated
separately.)

 Set up a recurrence relation with an appropriate initial


condition expressing the number of times the basic op. is
executed.

 Solve the recurrence (or, at the very least, establish its


solution’s order of growth) by backward substitutions or
another method.
Example 1: Recursive evaluation of n!
Definition: n! = 1 · 2 · … · (n-1) · n for n ≥ 1, and 0! = 1

Recursive definition of n!: F(n) = F(n-1) · n for n ≥ 1 and
F(0) = 1
Solving the recurrence for M(n)
Time complexity- n!
factorial(0) is only comparison (1 unit of time)
factorial(n) is 1 comparison, 1 multiplication, 1 subtraction and
time for factorial(n-1)
T(n) = T(n-1) + 3
T(0) = 1
T(n) = T(n-1) + 3
= T(n-2) + 6
= T(n-3) + 9
= T(n-4) + 12
= ...
= T(n-k) + 3k
as we know T(0) = 1 we need to find the value of k for which n - k = 0, k = n
T(n) = T(0) + 3n , k = n
= 1 + 3n
Thus time complexity of O(n)
Example 2: The Tower of Hanoi Puzzle

[Figure: three pegs; n disks must be moved from peg 1 to peg 3,
using the remaining peg as auxiliary]

Recurrence for number of moves: M(n) = 2M(n-1) + 1, M(1) = 1
Solving recurrence for number of moves
M(n) = 2M(n-1) + 1, M(1) = 1
Recursive equation: T(n) = 2T(n-1) + 1        (equation 1)

Solving it by back substitution:

T(n-1) = 2T(n-2) + 1                          (equation 2)
T(n-2) = 2T(n-3) + 1                          (equation 3)

Put the value of T(n-2) into equation 2 with the help of equation 3:

T(n-1) = 2(2T(n-3) + 1) + 1                   (equation 4)

Put the value of T(n-1) into equation 1 with the help of equation 4:

T(n) = 2(2(2T(n-3) + 1) + 1) + 1
T(n) = 2^3 T(n-3) + 2^2 + 2^1 + 1
Time complexity Contd…
After generalization:
T(n) = 2^k T(n-k) + 2^(k-1) + 2^(k-2) + ... + 2^2 + 2^1 + 1

Base condition T(1) = 1:
n - k = 1
k = n-1
Put k = n-1:
T(n) = 2^(n-1) T(1) + 2^(n-2) + ... + 2^2 + 2^1 + 1
     = 2^(n-1) + 2^(n-2) + ... + 2^2 + 2^1 + 1

This is a GP series, and the sum is 2^n - 1.

T(n) = O(2^n - 1), or you can say O(2^n), which is exponential.

1-90
T(n) = 2^(n-1) + 2^(n-2) + ... + 2^2 + 2^1 + 1

It is a geometric progression series with common ratio r = 2 and
first term a = 1 (= 2^0); its sum is 2^n - 1.

1-91
Tree of calls for the Tower of Hanoi Puzzle
Time complexity of TOH Problem

T(n) = 2T(n-1) + 1
T(n) = 2(2T(n-2) + 1) + 1
T(n) = 2^2 T(n-2) + 2^1 + 2^0
T(n) = 2^k T(n-k) + 2^(k-1) + 2^(k-2) + ... + 2^0

Solving this, the closed form comes out to be

T(n) = 2^n - 1, with T(0) = 0

Thus the time complexity is O(2^n).


Example 3: Counting #bits
Time Complexity

1-95
Fibonacci numbers
The Fibonacci numbers:
0, 1, 1, 2, 3, 5, 8, 13, 21, …
The Fibonacci recurrence:
F(n) = F(n-1) + F(n-2)
F(0) = 0
F(1) = 1

General 2nd order linear homogeneous recurrence with


constant coefficients:
aX(n) + bX(n-1) + cX(n-2) = 0
Solving aX(n) + bX(n-1) + cX(n-2) = 0

 Set up the characteristic equation (quadratic)


ar^2 + br + c = 0

 Solve to obtain roots r1 and r2

 General solution to the recurrence:

if r1 and r2 are two distinct real roots: X(n) = α·r1^n + β·r2^n
if r1 = r2 = r (two equal real roots):    X(n) = α·r^n + β·n·r^n

 Particular solution can be found by using initial conditions


Application to the Fibonacci numbers
F(n) = F(n-1) + F(n-2) or F(n) - F(n-1) - F(n-2) = 0
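Carrying these steps through for the Fibonacci recurrence gives the closed form (Binet's formula); the derivation is standard and sketched here:

```latex
\text{Characteristic equation: } r^2 - r - 1 = 0,
\qquad r_{1,2} = \frac{1 \pm \sqrt{5}}{2}
\]
\[
\text{General solution: } F(n) = \alpha r_1^n + \beta r_2^n
\]
\[
F(0)=0,\; F(1)=1 \;\Rightarrow\; \alpha = \tfrac{1}{\sqrt{5}},\;
\beta = -\tfrac{1}{\sqrt{5}}
\]
\[
F(n) = \frac{1}{\sqrt{5}}\left[\left(\frac{1+\sqrt{5}}{2}\right)^{n}
 - \left(\frac{1-\sqrt{5}}{2}\right)^{n}\right]
```

Since |(1-√5)/2| < 1, the second term vanishes as n grows, so F(n) ∈ Θ(φ^n) with φ = (1+√5)/2.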
Application to the Fibonacci numbers
Empirical analysis of time efficiency
Empirical analysis is an alternative to the mathematical
analysis of an algorithm’s efficiency that includes the following
steps:
 Understand the experiment’s purpose.
 Decide on the efficiency metric M to be measured and the
measurement unit (an operation count vs. a time unit).
 Decide on characteristics of the input sample (its range, size,
and so on).
 Prepare a program implementing the algorithm (or
algorithms) for the experimentation.
 Generate a sample of inputs.
 Run the algorithm (or algorithms) on the sample’s inputs
and record the data observed.
 Analyze the data obtained.
Algorithm Visualization
• Algorithm visualization is defined as the use of images to
convey some useful information about algorithms.
• That information can be a visual illustration of an
algorithm’s operation, of its performance on different
kinds of inputs, or of its execution speed versus that of
other algorithms for the same problem.
• An algorithm visualization uses graphic elements—
points, line segments, two- or three-dimensional bars,
and so on—to represent some “interesting events” in
the algorithm’s operation.
Algorithm Visualization
• There are two principal variations of algorithm
visualization:
1. Static algorithm visualization - algorithm’s
progress through a series of still images.

2. Dynamic algorithm visualization, also called


algorithm animation - shows a continuous, movie-like
presentation of an algorithm’s operations.
