Toh Challenge: Overview

1) The document discusses why the practical time results of implementing the Towers of Hanoi problem did not match the established theoretical time complexity of O(2^n - 1). 2) Factors like the time taken by print statements, background processes running on the computer, differences in constant factors, and machine-dependent timing results can cause practical times to differ from theoretical analysis. 3) To get practical times closer to the theory, the document recommends isolating the machine, precisely measuring basic operation times, and accounting for machine dependencies in the analysis.


ToH Challenge

NAME : PRAVEEN NAIK
Roll Number : 178
Email ID : [email protected]

OVERVIEW :
We analysed the Towers of Hanoi problem mathematically in class and
arrived at an efficiency of 2^n - 1 moves. We took 'disc movement' (a printf statement) as
our basic operation, with each recursive call contributing one basic operation. However,
when we implemented the same algorithm in the lab, the practical results bore little
relation to the established theory.

WHY ?
As we know, the Towers of Hanoi solution is recursive and includes no
prominent operation other than the printf statement, so we considered printf the basic
operation of the program. Its count is 2^n - 1, which belongs to the exponential order of
growth. In the theoretical analysis we do not consider the time taken by printf statements
or stdout, but in practice these statements take more time than we expect because printf is
a library function that performs formatting and buffered output. By itself this does not
distort the measured time efficiency, because the total time is the time per basic operation
multiplied by the number of operations.
The actual reason for the difference between the theoretical and practical values lies in
the value of n. Even though we say the computation time of the code varies as an
exponential function of n, it behaves differently depending on the value of n: time
complexity does not give an estimate of the real running time.
Even between two O(2^n) algorithms, the coefficient in front of the 2^n term decides the
real number of calculations; an exponential algorithm with a small enough constant can
even take less time than an O(n) algorithm for all inputs of practical size. In asymptotic
calculations we neglect coefficients and added or subtracted terms, but in real time they
also play a role in the actual running time. In our case we neglect the -1 term of 2^n - 1,
which contributes some small amount to the real time.
For some values of n the measurements do relate to the derived theory. However, many
other programs are running in the background during a 'real-time' measurement.

Another reason :
As mentioned above, programs running in the background (an mp3 player, a video
player, and so on) can change the practical time taken. The processor constantly jumps
between different processing "threads" or "instruction streams" to run several programs at
once, giving the illusion of concurrency. The computer ends up wasting processor cycles
while switching between jobs and does not run at optimal efficiency when
multitasking.[1]
A further problem with big-O notation is that the hidden constant can easily vary by
several orders of magnitude. For example, accessing memory from RAM is about a
hundred times slower than multiplying two floating-point numbers, but in big-O they are
counted the same.[2]

• The CPU can do about 1 arithmetic operation per cycle.
• If you access memory sequentially, or use less than about 1 MB of memory, you can
probably ignore memory-access cost (the data will usually be in cache).
• If you access more than about 1 MB of memory randomly, figure on the order of 100
cycles per memory access (you will get a lot of cache misses). [3]

Even when we measure the time practically, the same input gives a different 'time
taken' whenever other programs are running in the background.

Asymptotic complexity does not tell us anything about absolute running time; it tells us
how the running time grows as the input size grows. When we use asymptotic notation,
we care about the running time only asymptotically with respect to the input size.

How can we achieve the derived theory in practice ?


Keeping the machine in an ideal condition, i.e. with no programs running in the
background, would bring the measured time nearer to the theoretical value. Replacing the
printf library call with a cheaper output routine might also help. Determining the running
time of a function that calls itself recursively requires more work than analysing
non-recursive functions. The analysis of a recursive function requires that we associate
with each function F in a program an unknown running time TF(n). This unknown
function represents F's running time as a function of n, the size of F's arguments.
We then establish an inductive definition, called a recurrence relation, for TF(n), which
relates TF(n) to functions of the form TG(k) for the other functions G in the program and
their associated argument sizes k. If F is directly recursive, then one or more of the G's
will be F itself. The value of TF(n) is normally established by induction on the argument
size n, so it is necessary to pick a notion of argument size that guarantees functions are
called with progressively smaller arguments as the recursion proceeds.[4]
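For the Towers of Hanoi itself, this recipe gives back the count we derived in class.
Writing T(n) for the number of disc movements with n discs (to move n discs we move
n-1 discs aside, move the largest disc, then move the n-1 discs back on top of it):

```
T(1) = 1
T(n) = 2 T(n-1) + 1

Expanding the recurrence:
T(n) = 2 T(n-1) + 1
     = 4 T(n-2) + 2 + 1
     ...
     = 2^(n-1) T(1) + (2^(n-2) + ... + 2 + 1)
     = 2^(n-1) + (2^(n-1) - 1)
     = 2^n - 1
```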
We first need to find the time taken by the basic operation, printf, which varies across
machines (32-bit vs 64-bit). Multiplying this per-operation time by the number of
operations, 2^n - 1, gives an estimate of the total time, so the answer can change with the
machine or IDE. We would still need some alternative technique to get a fully accurate
time.
REFERENCES :
[1] - Techwalla (https://www.techwalla.com/articles/parallel-vs-serial-processor)
[2] - Quora (https://qr.ae/TWyfHs)
[3] - Quora (https://qr.ae/TWyfHs)
[4] - Stanford InfoLab (http://infolab.stanford.edu/~ullman/focs/ch03.pdf)
