The blue line in Fig. 2.4 shows how the PyList append method performs when the +
operator is replaced by a call to the built-in list append method instead. At 100,000
elements in the PyList, the time to add more elements drops from roughly 27 s to
maybe a second, and probably much less than that. That's a significant speedup in our
program. After making this change, the PyList append method is given in Listing 2.4.
class PyList:
    def __init__(self):
        self.items = []

    # The append method is used to add items to the sequence.
    def append(self, item):
        self.items.append(item)

    ...
Listing 2.4 Efficient Append
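To see where the speedup comes from, the short sketch below (our own driver code,
not part of the text) times both approaches. Building a list with the + operator
constructs a brand new list on every call, copying all existing items, while append
adds to the list in place.

import time

# An added timing sketch (not from the text). lst + [i] builds a
# brand new list on every call, copying all existing items (O(n)
# per append), while lst.append(i) runs in amortized O(1) time.
def build_with_concat(n):
    lst = []
    for i in range(n):
        lst = lst + [i]        # copies the whole list each time
    return lst

def build_with_append(n):
    lst = []
    for i in range(n):
        lst.append(i)          # amortized constant time
    return lst

for build in (build_with_concat, build_with_append):
    start = time.time()
    build(20000)
    print(build.__name__, round(time.time() - start, 3), "seconds")

Even for this modest size the difference is dramatic, which is exactly what the blue
line in Fig. 2.4 reflects.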
The algorithms we will study in this text will have one of the complexities O(1),
O(log n), O(n), O(n log n), O(n²), or O(cⁿ). A graph of the shapes of these functions
appears in Fig. 2.5. Most algorithms have one of these complexities, corresponding
to some power of n. Constant values added to, or multiplied by, the terms in a formula
for the time needed to complete a computation do not affect the overall com-
plexity of that operation. Computational complexity is affected only by the highest-order
term of the equation. The complexities graphed in Fig. 2.5 are each some power of
n or the log of n, except for the really awful exponential complexity of O(cⁿ), where
c is some constant value.
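To get a feel for these shapes before looking at the graph, the following sketch (an
illustration we have added, not from the text) prints each function's value for a few
sizes of n, using c = 2 for the exponential column.

import math

# An added sketch (not from the text) showing how fast each
# complexity class grows with n; the exponential column uses c = 2.
for n in (10, 100, 1000, 10000):
    exp = 2 ** n if n <= 60 else "astronomical"
    print(f"n={n:>6}  log n={math.log2(n):6.1f}  "
          f"n log n={n * math.log2(n):10.0f}  "
          f"n^2={n ** 2:>12}  2^n={exp}")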
As you are reading the text and encounter algorithms with differing complexities,
they will be one of the complexities shown in Fig. 2.5. As always, the variable n
represents the size of the data provided as input to the algorithm. The time taken
to process that data is the vertical axis in the graph. While we don't care about the
exact numbers in this graph, we do care about the overall shape of these functions.
The flatter the line (i.e. the lower its slope), the better the algorithm performs.
Clearly an algorithm that has exponential complexity (i.e. O(cⁿ)) or n-squared
complexity (i.e. O(n²)) will not perform very well except for very small values of n. If
you know your algorithm will never be called with large values of n, then an inefficient
algorithm might be acceptable, but you would have to be really sure that you knew
that your data size would always be small. Typically we want to design algorithms
that are as efficient as possible.
In subsequent chapters you will encounter sorting algorithms that are O(n²) and
then you'll learn that we can do better and achieve O(n log n) complexity. You'll
see search algorithms that are O(n) and then learn how to achieve O(log n) com-
plexity. You'll also learn a technique called hashing that will search in O(1) time.
The techniques you learn will help you deal with large amounts of data as efficiently
as possible.
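As a preview of that progression, the sketch below (illustrative code we have added,
not from the text) contrasts an O(n) linear search with an O(log n) binary search
(here via Python's bisect module), and uses a dictionary lookup as a stand-in for
the O(1) hashing technique mentioned above.

from bisect import bisect_left

# An added sketch (not from the text). Linear search may examine
# every item; binary search on sorted data halves the remaining
# range at each step; a dict lookup uses hashing.
def linear_search(items, target):
    for i, value in enumerate(items):      # up to n comparisons
        if value == target:
            return i
    return -1

def binary_search(sorted_items, target):
    i = bisect_left(sorted_items, target)  # about log n probes
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1

data = list(range(0, 1000000, 2))          # already sorted
print(linear_search(data, 999998))         # scans ~500,000 items
print(binary_search(data, 999998))         # ~20 probes
index = {value: i for i, value in enumerate(data)}
print(index[999998])                       # hashing: expected O(1)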