Mastering Python - Sample Chapter

Master the art of writing beautiful and powerful Python by using all
of the features that Python 3.5 offers
Preface
Python is a language that is easy to learn and is both powerful and convenient
from the start. Mastering Python, however, is a completely different matter.
Every programming problem you will encounter has at least several possible
solutions and/or paradigms to apply within the vast possibilities of Python. This
book will not only illustrate a range of different and new techniques but also explain
where and when a method should be applied.
This book is not a beginner's guide to Python 3. It is a book that can teach you about
the more advanced techniques possible within Python. Specifically targeting Python
3.5 and up, it also demonstrates several Python 3.5-only features such as async def
and await statements.
As a Python programmer with many years of experience, I will attempt to rationalize
the choices made in this book with relevant background information. These
rationalizations are in no way strict guidelines, however. Several of these cases boil
down to personal style in the end. Just know that they stem from experience and are,
in many cases, the solutions recommended by the Python community.
Some of the references in this book might not be obvious to you if you are not a fan
of Monty Python. This book extensively uses spam and eggs instead of foo and bar in
code samples. To provide some background information, I recommend watching the
"Spam" sketch by Monty Python. It is positively silly!
Chapter 3, Containers and Collections – Storing Data the Right Way, is where we use
the many containers and collections bundled with Python to create code that is fast
and readable.
Chapter 4, Functional Programming – Readability Versus Brevity, covers functional
programming techniques such as list/dict/set comprehensions and lambda
statements that are available in Python. Additionally, it illustrates their similarities
with the mathematical principles involved.
Chapter 5, Decorators – Enabling Code Reuse by Decorating, explains not only how to
create your own function/class decorators, but also how internal decorators such as
property, staticmethod, and classmethod work.
Chapter 6, Generators and Coroutines – Infinity, One Step at a Time, shows how
generators and coroutines can be used to lazily evaluate structures of infinite size.
Chapter 7, Async IO – Multithreading without Threads, demonstrates the usage of
asynchronous functions using async def and await so that external resources no
longer stall your Python processes.
Chapter 8, Metaclasses – Making Classes (Not Instances) Smarter, goes deeper into the
creation of classes and how class behavior can be completely modified.
Chapter 9, Documentation – How to Use Sphinx and reStructuredText, shows how
you can make Sphinx automatically document your code with very little effort.
Additionally, it shows how the Napoleon syntax can be used to document function
arguments in a way that is legible both in the code and the documentation.
Chapter 10, Testing and Logging – Preparing for Bugs, explains how code can be tested
and how logging can be added to enable easy debugging in case bugs occur at a
later time.
Chapter 11, Debugging – Solving the Bugs, demonstrates several methods of hunting
down bugs with the use of tracing, logging, and interactive debugging.
Chapter 12, Performance – Tracking and Reducing Your Memory and CPU Usage, shows
several methods of measuring and improving CPU and memory usage.
Chapter 13, Multiprocessing – When a Single CPU Core Is Not Enough, illustrates that
the multiprocessing library can be used to execute your code, not just on multiple
processors but even on multiple machines.
Chapter 14, Extensions in C/C++, System Calls, and C/C++ Libraries, covers the calling of
C/C++ functions for both interoperability and performance using Ctypes, CFFI, and
native C/C++.
Chapter 15, Packaging – Creating Your Own Libraries or Applications, demonstrates
the usage of setuptools and setup.py to build and deploy packages on the Python
Package Index (PyPI).
In this chapter, we will cover:
Decorating functions
Decorating classes

Decorating functions
Essentially, a decorator is nothing more than a function or class wrapper. If we
have a function called spam and a decorator called eggs, then the following would
decorate spam with eggs:
spam = eggs(spam)
To make the syntax easier to use, Python has a special syntax for this case. So, instead
of adding a line such as the preceding one below the function, you can simply
decorate a function using the @ operator:
@eggs
def spam():
pass
The most basic decorator does nothing but return its argument:

>>> def eggs(function):
...     return function

This gets spam as the argument and returns that function again, effectively
changing nothing. Most decorators nest functions, however. The following decorator
will print all arguments sent to spam and pass them to spam unmodified:
>>> import functools

>>> def eggs(function):
...     @functools.wraps(function)
...     def _eggs(*args, **kwargs):
...         print('%r got args: %r and kwargs: %r' % (
...             function.__name__, args, kwargs))
...         return function(*args, **kwargs)
...     return _eggs

>>> @eggs
... def spam(a, b, c):
...     return a * b + c

>>> spam(1, 2, 3)
'spam' got args: (1, 2, 3) and kwargs: {}
5
This should indicate how powerful decorators can be. By modifying *args and
**kwargs, you can add, modify and remove arguments completely. Additionally,
the return statement can be modified as well. Instead of return function(...),
you can return something completely different if you wish.
To see why functools.wraps matters, let's look at what happens without it:

>>> def eggs(function):
...     def _eggs(*args, **kwargs):
...         return function(*args, **kwargs)
...     return _eggs

>>> @eggs
... def spam(a, b, c):
...     'The spam function. Returns a * b + c'
...     return a * b + c

>>> help(spam)
Help on function _eggs in module ...:
<BLANKLINE>
_eggs(*args, **kwargs)
<BLANKLINE>

>>> spam.__name__
'_eggs'
Now, our spam method has no documentation anymore and its name is gone. It has
been renamed to _eggs. Since we are indeed calling _eggs, this is understandable,
but it's very inconvenient for code that relies on this information. Now we will try
the same code with one minor difference: we will use functools.wraps:
>>> import functools

>>> def eggs(function):
...     @functools.wraps(function)
...     def _eggs(*args, **kwargs):
...         return function(*args, **kwargs)
...     return _eggs

>>> @eggs
... def spam(a, b, c):
...     'The spam function. Returns a * b + c'
...     return a * b + c

>>> help(spam)
Help on function spam in module ...:
<BLANKLINE>
spam(a, b, c)
    The spam function. Returns a * b + c
<BLANKLINE>

>>> spam.__name__
'spam'
Without any further changes, we now have documentation and the expected function
name. The working of functools.wraps is nothing magical though; it simply copies
and updates several attributes. Specifically, the following attributes are copied:
__doc__
__name__
__module__
__annotations__
__qualname__
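These attributes are listed in functools.WRAPPER_ASSIGNMENTS, so you can
inspect (or extend) the list yourself; a small sketch:

```python
import functools

# functools.wraps copies the attributes listed in
# functools.WRAPPER_ASSIGNMENTS onto the wrapper
print(sorted(functools.WRAPPER_ASSIGNMENTS))

def eggs(function):
    @functools.wraps(function)
    def _eggs(*args, **kwargs):
        return function(*args, **kwargs)
    return _eggs

@eggs
def spam():
    '''The spam function'''

# The metadata of the original function survives the wrapping
print(spam.__name__, spam.__doc__)
```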
Let's take our simple spam function and add some output so that we can see what
happens internally:

>>> def spam(eggs):
...     output = 'spam' * eggs
...     print('spam(%r): %r' % (eggs, output))
...     return output

>>> output = spam(3)
spam(3): 'spamspamspam'
While this works, wouldn't it be far nicer to have a little decorator that takes care of
this problem?
>>> import functools

>>> def debug(function):
...     @functools.wraps(function)
...     def _debug(*args, **kwargs):
...         output = function(*args, **kwargs)
...         print('%s(%r, %r): %r' % (
...             function.__name__, args, kwargs, output))
...         return output
...     return _debug

>>> @debug
... def spam(eggs):
...     return 'spam' * eggs

>>> output = spam(3)
spam((3,), {}): 'spamspamspam'
Now we have a decorator that we can easily reuse for any function that prints the
input, output, and function name. This type of decorator can also be very useful for
logging applications, as we will see in Chapter 10, Testing and Logging – Preparing for
Bugs. It should be noted that you can use this example even if you are not able to
modify the module containing the original code. We can wrap the function locally
and even monkey-patch the module if needed:
import some_module
# Regular call
some_module.some_function()
# Wrap the function
debug_some_function = debug(some_module.some_function)
# Call the debug version
debug_some_function()
# Monkey patch the original module
some_module.some_function = debug_some_function
# Now this calls the debug version of the function
some_module.some_function()
Naturally, monkey-patching is not a good idea in production code, but it can be very
useful when debugging.
A classic use of decorators is memoization: caching the results of a function so that
repeated calls with the same arguments are answered from a cache. The following
decorator attaches the cache to the decorated function itself:

>>> import functools

>>> def memoize(function):
...     function.cache = dict()
...
...     @functools.wraps(function)
...     def _memoize(*args):
...         if args not in function.cache:
...             function.cache[args] = function(*args)
...         return function.cache[args]
...     return _memoize

>>> @memoize
... def fibonacci(n):
...     if n < 2:
...         return n
...     else:
...         return fibonacci(n - 1) + fibonacci(n - 2)

>>> for i in range(1, 7):
...     print('fibonacci %d: %d' % (i, fibonacci(i)))
fibonacci 1: 1
fibonacci 2: 1
fibonacci 3: 2
fibonacci 4: 3
fibonacci 5: 5
fibonacci 6: 8

>>> fibonacci.__wrapped__.cache
{(5,): 5, (0,): 0, (6,): 8, (1,): 1, (2,): 1, (3,): 2, (4,): 3}
While this example would work just fine without any memoization, for larger
numbers it would kill the system. For every n >= 2, the function recursively executes
fibonacci(n - 1) and fibonacci(n - 2), effectively giving an exponential time
complexity. For n=30, the Fibonacci function is already called 2,692,537 times, which
is still doable nonetheless. At n=40, it is going to take you a very long time
to calculate.
The memoized version, however, doesn't even break a sweat and only needs to
execute 31 times for n=30.
This decorator also shows how a context can be attached to a function itself. In this
case, the cache property becomes a property of the internal (wrapped fibonacci)
function so that an extra memoize decorator for a different object won't clash with
any of the other decorated functions.
Note, however, that implementing the memoization function yourself is generally not
that useful anymore since Python introduced lru_cache (least recently used cache) in
Python 3.2. The lru_cache is similar to the preceding memoize function but a bit more
advanced. It only maintains a fixed (128 by default) cache size to save memory and
uses some statistics to check whether the cache size should be increased.
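Before diving into its internals, here is a quick sketch of plain lru_cache usage;
the cache_info() method reports the hits, misses, and current cache size:

```python
import functools

@functools.lru_cache(maxsize=128)
def fibonacci(n):
    # Thanks to the cache, each value is computed only once
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(30))  # 832040
print(fibonacci.cache_info())
```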
To demonstrate how lru_cache works internally, we will calculate
fibonacci(100), which would keep our computer busy until the end of the
universe without any caching. Moreover, to make sure that we can actually see how
many times the fibonacci function is being called, we'll add an extra decorator that
keeps track of the count, as follows:
>>> import functools

>>> def counter(function):
...     function.calls = 0
...
...     @functools.wraps(function)
...     def _counter(*args, **kwargs):
...         function.calls += 1
...         return function(*args, **kwargs)
...     return _counter

>>> @functools.lru_cache(maxsize=3)
... @counter
... def fibonacci(n):
...     if n < 2:
...         return n
...     else:
...         return fibonacci(n - 1) + fibonacci(n - 2)

>>> fibonacci(100)
354224848179261915075

# The result from our counter function which is now wrapped both by
# our counter and the cache
>>> fibonacci.__wrapped__.__wrapped__.calls
101
You might wonder why we need only 101 calls with a cache size of 3. That's because
we recursively require only n - 1 and n - 2, so we have no need of a larger cache
in this case. With others, it would still be useful though.
Additionally, this example shows the usage of two decorators for a single function.
You can see these as the layers of an onion. The first one is the outer layer and it
works towards the inside. When calling fibonacci, lru_cache will be called first
because it's the first decorator in the list. Assuming there is no cache available yet,
the counter decorator will be called. Within the counter, the actual fibonacci
function will be called.
Returning the values works in the reverse order, of course; fibonacci returns its
value to counter, which passes the value along to lru_cache.
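The layering is easy to verify with two trivial decorators that print when they are
entered and left (the decorator names here are made up for illustration):

```python
import functools

def outer(function):
    @functools.wraps(function)
    def _outer(*args, **kwargs):
        print('entering outer')
        result = function(*args, **kwargs)
        print('leaving outer')
        return result
    return _outer

def inner(function):
    @functools.wraps(function)
    def _inner(*args, **kwargs):
        print('entering inner')
        result = function(*args, **kwargs)
        print('leaving inner')
        return result
    return _inner

@outer
@inner
def spam():
    print('spam')

# The first decorator in the list is entered first
spam()
# entering outer
# entering inner
# spam
# leaving inner
# leaving outer
```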
First, here is a decorator with a mandatory argument; note the extra layer
of nesting:

>>> import functools

>>> def add(extra_n=1):
...     'Add extra_n to the input of the decorated function'
...
...     # The inner function, note that this is the actual
...     # decorator
...     def _add(function):
...         # The actual function that will be called
...         @functools.wraps(function)
...         def __add(n):
...             return function(n + extra_n)
...
...         return __add
...
...     return _add

>>> @add(extra_n=2)
... def eggs(n):
...     return 'eggs' * n

>>> eggs(2)
'eggseggseggseggs'
Optional arguments are a different matter, however, because they make the extra
function layer optional. With arguments, you need three layers, but without
arguments, you need only two layers. Since decorators are essentially regular
functions that return functions, the difference would be to return the sub-function or
the sub-sub-function, based on the parameters. This leaves just one issue: detecting
whether the parameter is a function or a regular parameter. To illustrate, with the
parameters, the actual call looks like the following:

add(extra_n=2)(eggs)(2)

To detect whether the decorator was called with a function or a regular argument
as a parameter, we have several options, none of which are completely ideal in
my opinion:
In my opinion, the first one, using keyword arguments, is the better of the two
options because it is somewhat more explicit and leaves less room for confusion.
The second option could be problematic if, for some reason, your argument is
callable as well.
Using the first method, the normal (non-keyword) argument has to be the decorated
function and the other two checks can still apply. We can still check whether the
function is indeed callable and whether there is only a single argument available.
Here is an example using a modified version of the previous example:
>>> import functools

>>> def add(*args, **kwargs):
...     'Add n to the input of the decorated function'
...
...     # The default kwargs, stored separately so they can be
...     # filled either at decoration time or at call time
...     default_kwargs = dict(n=1)
...
...     # The inner function, note that this is actually a
...     # decorator itself
...     def _add(function):
...         # The actual function that will be called
...         @functools.wraps(function)
...         def __add(n):
...             default_kwargs.update(kwargs)
...             return function(n + default_kwargs['n'])
...
...         return __add
...
...     if len(args) == 1 and callable(args[0]) and not kwargs:
...         # Decorator call without arguments, just call it
...         # ourselves
...         return _add(args[0])
...     elif not args and kwargs:
...         # Decorator call with keyword arguments, store the
...         # kwargs so they can be applied through the
...         # first argument
...         default_kwargs.update(kwargs)
...         return _add
...     else:
...         raise RuntimeError('This decorator only supports '
...                            'keyword arguments')

>>> @add
... def spam(n):
...     return 'spam' * n

>>> @add(n=3)
... def eggs(n):
...     return 'eggs' * n

>>> spam(3)
'spamspamspamspam'

>>> eggs(2)
'eggseggseggseggseggs'

>>> @add(3)
... def bacon(n):
...     return 'bacon' * n
Traceback (most recent call last):
    ...
RuntimeError: This decorator only supports keyword arguments
Whenever you have the choice available, I recommend that you either have a
decorator with arguments or without them, instead of having optional arguments.
However, if you have a really good reason for making the arguments optional, then
you have a relatively safe method of making this possible.
Decorators are not limited to functions; a class with a __call__ method works just
as well. Here is the debug decorator from earlier, implemented as a class:

>>> import functools

>>> class Debug(object):
...
...     def __init__(self, function):
...         self.function = function
...         # functools.wraps for classes
...         functools.update_wrapper(self, function)
...
...     def __call__(self, *args, **kwargs):
...         output = self.function(*args, **kwargs)
...         print('%s(%r, %r): %r' % (
...             self.function.__name__, args, kwargs, output))
...         return output

>>> @Debug
... def spam(eggs):
...     return 'spam' * eggs

>>> output = spam(3)
spam((3,), {}): 'spamspamspam'
The only notable difference between functions and classes is that functools.wraps
is now replaced with functools.update_wrapper in the __init__ method.
[ 115 ]
As is the case with regular functions, the class function decorator now gets passed
along self as the instance. Nothing unexpected!
To see the difference between the method types, here is a class whose methods
simply print their arguments:

>>> import pprint

>>> class Spam(object):
...
...     @classmethod
...     def some_classmethod(cls, *args, **kwargs):
...         print('cls: %r' % cls)
...         print('args: %s' % pprint.pformat(args))
...         print('kwargs: %s' % pprint.pformat(kwargs))
...
...     @staticmethod
...     def some_staticmethod(*args, **kwargs):
...         print('args: %s' % pprint.pformat(args))
...         print('kwargs: %s' % pprint.pformat(kwargs))

>>> Spam.some_classmethod()
cls: <class '...Spam'>
args: ()
kwargs: {}

>>> Spam.some_staticmethod(1, 2, a=3, b=4)
args: (1, 2)
kwargs: {'a': 3, 'b': 4}
Descriptors can be used to modify the binding behavior of object attributes. The
following descriptor adds a fixed amount to the attribute it wraps:

>>> class MoreSpam(object):
...
...     def __init__(self, more=1):
...         self.more = more
...
...     def __get__(self, instance, cls):
...         return self.more + instance.spam
...
...     def __set__(self, instance, value):
...         instance.spam = value - self.more

>>> class Spam(object):
...
...     more_spam = MoreSpam(5)
...
...     def __init__(self, spam):
...         self.spam = spam

>>> spam = Spam(1)
>>> spam.more_spam
6
>>> spam.more_spam = 10
>>> spam.spam
5
As you can see, whenever we set or get values from more_spam, it actually calls
__get__ or __set__ on MoreSpam. This is a very useful feat for automatic conversions
and type checking; the property decorator we will see in the next paragraphs is just
a more convenient implementation of this technique.
Now that we know how descriptors work, we can continue with creating the
classmethod and staticmethod decorators. For these two, we simply need to
modify __get__ instead of __call__ so that we can control which type of instance
(or none at all) is passed along:
import functools

class ClassMethod(object):
    def __init__(self, method):
        self.method = method

    def __get__(self, instance, cls):
        @functools.wraps(self.method)
        def method(*args, **kwargs):
            return self.method(cls, *args, **kwargs)
        return method

class StaticMethod(object):
    def __init__(self, method):
        self.method = method

    def __get__(self, instance, cls):
        return self.method
The property decorator can be used in two ways: through the classic property()
call with separate getter, setter, and deleter functions, and through the decorator
syntax:

>>> class Spam(object):
...
...     def get_eggs(self):
...         print('getting eggs')
...         return self._eggs
...
...     def set_eggs(self, eggs):
...         print('setting eggs to %s' % eggs)
...         self._eggs = eggs
...
...     def delete_eggs(self):
...         print('deleting eggs')
...         del self._eggs
...
...     eggs = property(get_eggs, set_eggs, delete_eggs)
...
...     @property
...     def spam(self):
...         print('getting spam')
...         return self._spam
...
...     @spam.setter
...     def spam(self, spam):
...         print('setting spam to %s' % spam)
...         self._spam = spam
...
...     @spam.deleter
...     def spam(self):
...         print('deleting spam')
...         del self._spam
The property decorator itself is essentially just a descriptor. Here is a simplified
implementation of how it works internally:

class Property(object):
    def __init__(self, fget=None, fset=None, fdel=None,
                 doc=None):
        self.fget = fget
        self.fset = fset
        self.fdel = fdel
        # If no specific documentation is available, copy it
        # from the getter
        if fget and not doc:
            doc = fget.__doc__
        self.__doc__ = doc

    def __get__(self, instance, cls):
        if instance is None:
            # Redirect class (not instance) properties to
            # self
            return self
        elif self.fget:
            return self.fget(instance)
        else:
            raise AttributeError('unreadable attribute')

    def __set__(self, instance, value):
        if self.fset:
            self.fset(instance, value)
        else:
            raise AttributeError("can't set attribute")

    def __delete__(self, instance):
        if self.fdel:
            self.fdel(instance)
        else:
            raise AttributeError("can't delete attribute")

    def getter(self, fget):
        return type(self)(fget, self.fset, self.fdel)

    def setter(self, fset):
        return type(self)(self.fget, fset, self.fdel)

    def deleter(self, fdel):
        return type(self)(self.fget, self.fset, fdel)
As a simple demonstration of attribute hooks, the following class stores all
attributes in an internal registry dictionary instead of the regular instance
__dict__:

>>> class Spam(object):
...     def __init__(self):
...         self.registry = {}
...
...     def __setattr__(self, key, value):
...         if key == 'registry':
...             # Assign the registry itself normally to
...             # prevent infinite recursion
...             object.__setattr__(self, key, value)
...         else:
...             self.registry[key] = value
...
...     def __getattr__(self, key):
...         return self.registry[key]
...
...     def __delattr__(self, key):
...         del self.registry[key]
Python looks for the key in instance.__dict__ first and calls __getattr__ only if
it does not exist there. That's why we never see a __getattr__ call for the registry
attribute. The __getattribute__ method, on the other hand, is called in all cases,
which makes it a bit more dangerous to use. With the __getattribute__ method,
you will need a specific exclusion for registry, since it will be executed recursively
if you try to access self.registry.
There is rarely a need to write descriptors yourself, but they are used by several
internal Python mechanisms, such as the super() function when inheriting classes.
Decorating classes
Python 2.6 introduced the class decorator syntax. As is the case with the function
decorator syntax, this is not really a new technique either. Even without the
syntax, a class can be decorated simply by executing DecoratedClass =
decorator(RegularClass). After the previous paragraphs, you should be familiar
with writing decorators. Class decorators are no different from regular ones, except
for the fact that they take a class instead of a function. As is the case with functions,
this happens at declaration time and not at instantiating/calling time.
Because there are quite a few alternative ways to modify how classes work, such
as standard inheritance, mixins, and metaclasses (more about that in Chapter 8,
Metaclasses – Making Classes (Not Instances) Smarter), class decorators are never strictly
needed. This does not reduce their usefulness, but it does offer an explanation of
why you will most likely not see too many examples of class decorating in the wild.
The classic example of a class decorator is the singleton, a class of which only a
single instance can ever exist:

>>> import functools

>>> def singleton(cls):
...     instances = dict()
...
...     @functools.wraps(cls)
...     def _singleton(*args, **kwargs):
...         if cls not in instances:
...             instances[cls] = cls(*args, **kwargs)
...         return instances[cls]
...     return _singleton

>>> @singleton
... class Spam(object):
...     def __init__(self):
...         print('Executing init')

>>> a = Spam()
Executing init
>>> b = Spam()
>>> a is b
True
>>> a.x = 123
>>> b.x
123
As you can see in the a is b comparison, both objects have the same identity, so
we can conclude that they are indeed the same object. As is the case with regular
decorators, due to the functools.wraps functionality, we can still access the original
class through Spam.__wrapped__ if needed.
The is operator compares objects by identity, which is implemented
as the memory address in CPython. If a is b returns True, we can
conclude that both a and b are the same instance.
If a class defines __eq__ and one ordering operator such as __lt__ or __gt__, the
functools.total_ordering class decorator can fill in the remaining comparison
methods for you:

>>> import functools

>>> class Value(object):
...     def __init__(self, value):
...         self.value = value
...
...     def __repr__(self):
...         return '<%s[%s]>' % (self.__class__.__name__, self.value)

>>> @functools.total_ordering
... class Spam(Value):
...     def __gt__(self, other):
...         return self.value > other.value
...
...     def __eq__(self, other):
...         return self.value == other.value

>>> @functools.total_ordering
... class Egg(Value):
...     def __lt__(self, other):
...         return self.value < other.value
...
...     def __eq__(self, other):
...         return self.value == other.value
Now, you might be wondering, "Why isn't there a class decorator to make a class
sortable using a specified key property?" Well, that might indeed be a good idea for
the functools library but it isn't there yet. So let's see how we would implement
something like it:
>>> def sort_by_attribute(attr, keyfunc=getattr):
...     def _sort_by_attribute(cls):
...         def __gt__(self, other):
...             return keyfunc(self, attr) > keyfunc(other, attr)
...
...         def __ge__(self, other):
...             return keyfunc(self, attr) >= keyfunc(other, attr)
...
...         def __lt__(self, other):
...             return keyfunc(self, attr) < keyfunc(other, attr)
...
...         def __le__(self, other):
...             return keyfunc(self, attr) <= keyfunc(other, attr)
...
...         def __eq__(self, other):
...             return keyfunc(self, attr) == keyfunc(other, attr)
...
...         cls.__gt__ = __gt__
...         cls.__ge__ = __ge__
...         cls.__lt__ = __lt__
...         cls.__le__ = __le__
...         cls.__eq__ = __eq__
...
...         return cls
...     return _sort_by_attribute

>>> class Value(object):
...     def __init__(self, value):
...         self.value = value
...
...     def __repr__(self):
...         return '<%s[%s]>' % (self.__class__.__name__, self.value)

>>> @sort_by_attribute('value')
... class Spam(Value):
...     pass

>>> numbers = [Spam(4), Spam(2), Spam(3), Spam(1)]
>>> sorted(numbers)
[<Spam[1]>, <Spam[2]>, <Spam[3]>, <Spam[4]>]
Certainly, this greatly simplifies the making of a sortable class. And if you would
rather have your own key function instead of getattr, it's even easier: since
keyfunc is an argument, you can simply pass your own function with the same
signature as getattr to the decorator.
Useful decorators
In addition to the ones already mentioned in this chapter, Python comes bundled
with a few other useful decorators. There are some that aren't in the standard
library (yet?).
The idea of single dispatch is that depending on the type you pass along, the correct
function is called. Since str + int results in an error in Python, this can be very
convenient to automatically convert your arguments before passing them to your
function. This can be useful to separate the actual workings of your function from the
type conversions.
Since Python 3.4, there is a decorator that makes it easy to implement the single
dispatch pattern in Python, for those cases where you need to handle a specific type
differently from the normal execution. Here is the basic example:
>>> import functools

>>> @functools.singledispatch
... def printer(value):
...     print('other: %r' % value)

>>> @printer.register(str)
... def str_printer(value):
...     print(value)

>>> @printer.register(int)
... def int_printer(value):
...     printer('int: %d' % value)

>>> @printer.register(dict)
... def dict_printer(value):
...     printer('dict:')
...     for k, v in sorted(value.items()):
...         printer('    key: %r, value: %r' % (k, v))

>>> printer('spam')
spam
>>> printer([1, 2, 3])
other: [1, 2, 3]
>>> printer(123)
int: 123
>>> printer(dict(a=1, b=2))
dict:
    key: 'a', value: 1
    key: 'b', value: 2
See how, depending on the type, the other functions were called? This pattern can be
very useful for reducing the complexity of a single function that takes several types
of argument.
When naming the functions, make sure that you do not
overwrite the original singledispatch function. If we had
named str_printer as just printer, it would overwrite the
initial printer function. This would make it impossible to
access the original printer function and make all register
operations after that fail as well.
As a more useful example, here is a function that writes data as JSON; given a file
object it writes directly, and given a filename (str or bytes) it opens the file first:

>>> import json
>>> import functools

>>> @functools.singledispatch
... def write_as_json(file, data):
...     json.dump(data, file)

>>> @write_as_json.register(str)
... @write_as_json.register(bytes)
... def write_as_json_filename(file, data):
...     with open(file, 'w') as fh:
...         write_as_json(fh, data)

>>> data = dict(a=1, b=2, c=3)
>>> write_as_json('test1.json', data)
>>> write_as_json(b'test2.json', data)
>>> with open('test3.json', 'w') as fh:
...     write_as_json(fh, data)
So now we have a single write_as_json function; it calls the right code depending
on the type. If it's a str or bytes object, it will automatically open the file and call
the regular version of write_as_json, which accepts file objects.
Writing a decorator that does this is not that hard to do, of course, but it's still
quite convenient to have it in the base library. It most certainly beats a couple of
isinstance calls in your function. To see which function will be called, you can use
the write_as_json.dispatch function with a specific type. When passing along
str, you will get the write_as_json_filename function. It should be noted that
the names of the dispatched functions are completely arbitrary. They are accessible
as regular functions, of course, but you can name them anything you like.
To check the registered types, you can access the registry, which is a dictionary,
through write_as_json.registry:
>>> write_as_json.registry.keys()
dict_keys([<class 'bytes'>, <class 'object'>, <class 'str'>])
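To make the dispatch and registry lookups concrete, here is a small self-contained
sketch using a fresh printer function (so the snippet stands on its own):

```python
import functools

@functools.singledispatch
def printer(value):
    print('other: %r' % value)

@printer.register(str)
def str_printer(value):
    print(value)

# dispatch returns the function that would be called for a type
assert printer.dispatch(str) is str_printer
# Unregistered types fall back to the base (object) implementation
assert printer.dispatch(list) is printer.registry[object]

print(sorted(cls.__name__ for cls in printer.registry))  # ['object', 'str']
```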
Let's just assume for now that the open function is not usable as a context manager
and that we need to build our own function to do this. The standard method of
creating a context manager is by creating a class that implements the __enter__ and
__exit__ methods, but that's a bit verbose. We can have it shorter and simpler:
>>> import contextlib

>>> @contextlib.contextmanager
... def open_context_manager(filename, mode='r'):
...     fh = open(filename, mode)
...     yield fh
...     fh.close()
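For comparison, the verbose class-based version would look something like the
following sketch (a minimal stand-in, not the actual implementation of open):

```python
class OpenContextManager(object):
    # A class-based context manager: __enter__ acquires the
    # resource and __exit__ releases it
    def __init__(self, filename, mode='r'):
        self.filename = filename
        self.mode = mode

    def __enter__(self):
        self.fh = open(self.filename, self.mode)
        return self.fh

    def __exit__(self, exc_type, exc_value, traceback):
        # Always close, even if the block raised an exception
        self.fh.close()

with OpenContextManager('test.txt', 'w') as fh:
    fh.write('spam')
```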
Simple, right? However, I should mention that for this specific case, the closing of
objects, there is a dedicated function in contextlib, and it is even easier to use.
Let's demonstrate it:
>>> import contextlib

>>> with contextlib.closing(open('test.txt', 'a')) as fh:
...     pass
For a file object, this is of course not needed since it already functions as a context
manager. However, some objects such as requests made by urllib don't support
automatic closing in that manner and benefit from this function.
But wait; there's more! In addition to being usable in a with statement, the results
of a contextmanager are actually usable as decorators since Python 3.2. In older
Python versions, it was simply a small wrapper, but since Python 3.2 it's based on the
ContextDecorator class, which makes it a decorator. The previous context manager
isn't really suitable for that task since it yields a result (more about that in Chapter 6,
Generators and Coroutines – Infinity, One Step at a Time), but we can think of
other functions:
>>> @contextlib.contextmanager
... def debug(name):
...     print('Debugging %r:' % name)
...     yield
...     print('End of debugging %r' % name)

>>> @debug('spam')
... def spam():
...     print('This is the inside of our spam function')

>>> spam()
Debugging 'spam':
This is the inside of our spam function
End of debugging 'spam'
There are quite a few nice use cases for this, but at the very least, it's just a convenient
way to wrap a function in a context without all the (nested) with statements.
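If you prefer, you can also subclass contextlib.ContextDecorator directly instead
of using the contextmanager generator; a sketch of the same debug context written
as a class:

```python
import contextlib

class Debug(contextlib.ContextDecorator):
    # Subclassing ContextDecorator makes instances usable both
    # in a `with` statement and as decorators
    def __init__(self, name):
        self.name = name

    def __enter__(self):
        print('Debugging %r:' % self.name)

    def __exit__(self, exc_type, exc_value, traceback):
        print('End of debugging %r' % self.name)

@Debug('spam')
def spam():
    print('This is the inside of our spam function')

spam()
# Debugging 'spam':
# This is the inside of our spam function
# End of debugging 'spam'
```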
Since Python 3.5 is not that common yet, here's a decorator that achieves similar
type checking through more advanced means. To allow for this type of checking,
some magic has to be used, specifically the usage of the inspect module. Personally,
I am not a great fan of inspecting code to perform tricks like these, as they are easy
to break. This piece of code actually breaks when a regular decorator (one that
doesn't copy the argspec) is used between the function and this decorator, but it's a
nice example nonetheless:
>>> import inspect
>>> import functools

>>> def to_int(name, minimum=None, maximum=None):
...     def _to_int(function):
...         # Use the signature of the original function for the
...         # arguments
...         signature = inspect.signature(function)
...
...         # The signature is not copied by the regular
...         # functools.wraps call, so copy it explicitly
...         # as well
...         @functools.wraps(function, ['__signature__'])
...         @functools.wraps(function)
...         def __to_int(*args, **kwargs):
...             # Bind the arguments to the original signature
...             bound = signature.bind(*args, **kwargs)
...
...             # Get the value, either from the arguments or
...             # from the default
...             if name in bound.arguments:
...                 value = bound.arguments[name]
...             else:
...                 value = signature.parameters[name].default
...
...             # Convert and check the value
...             value = int(value)
...             if minimum is not None:
...                 assert value >= minimum, (
...                     '%s should be at least %s, got: %s' % (
...                         name, minimum, value))
...             if maximum is not None:
...                 assert value <= maximum, (
...                     '%s should be at most %s, got: %s' % (
...                         name, maximum, value))
...
...             bound.arguments[name] = value
...             return function(*bound.args, **bound.kwargs)
...
...         return __to_int
...     return _to_int

>>> @to_int('a', minimum=10)
... @to_int('b', maximum=10)
... @to_int('c')
... def spam(a, b, c=10):
...     print('a', a)
...     print('b', b)
...     print('c', c)

>>> spam(a=1, b=2)
Traceback (most recent call last):
    ...
AssertionError: a should be at least 10, got: 1

>>> spam()
Traceback (most recent call last):
    ...
TypeError: 'a' parameter lacking default value

>>> spam('spam', {})
Traceback (most recent call last):
    ...
ValueError: invalid literal for int() with base 10: 'spam'
Because of the inspect magic, I'm still not sure whether I would recommend
using the decorator like this. Instead, I would opt for a simpler version that uses
no inspect whatsoever and simply parses the arguments from kwargs:
>>> import functools

>>> def to_int(name, minimum=None, maximum=None):
...     def _to_int(function):
...         @functools.wraps(function)
...         def __to_int(**kwargs):
...             value = int(kwargs.get(name))
...
...             # Check the value and store it back in kwargs
...             if minimum is not None:
...                 assert value >= minimum, (
...                     '%s should be at least %s, got: %s' % (
...                         name, minimum, value))
...             if maximum is not None:
...                 assert value <= maximum, (
...                     '%s should be at most %s, got: %s' % (
...                         name, maximum, value))
...
...             kwargs[name] = value
...             return function(**kwargs)
...
...         return __to_int
...     return _to_int

>>> @to_int('a', minimum=10)
... @to_int('b', maximum=10)
... def spam(a, b):
...     print('a', a)
...     print('b', b)

>>> spam(a=12, b=3)
a 12
b 3
import functools
import warnings

def ignore_warning(warning, count=None):
    def _ignore_warning(function):
        @functools.wraps(function)
        def __ignore_warning(*args, **kwargs):
            # Execute the code while recording all warnings
            with warnings.catch_warnings(record=True) as ws:
                # Catch all warnings of this type
                warnings.simplefilter('always', warning)
                # Execute the function
                result = function(*args, **kwargs)

            # Now that all code was executed and the warnings
            # collected, re-send all warnings that are beyond our
            # expected number of warnings
            if count is not None:
                for w in ws[count:]:
                    warnings.showwarning(
                        message=w.message,
                        category=w.category,
                        filename=w.filename,
                        lineno=w.lineno,
                        file=w.file,
                        line=w.line,
                    )

            return result
        return __ignore_warning
    return _ignore_warning

@ignore_warning(DeprecationWarning, count=1)
def spam():
    warnings.warn('deprecation 1', DeprecationWarning)
    warnings.warn('deprecation 2', DeprecationWarning)
Using this method, we can catch the first (expected) warning and still see the second
(not expected) warning.
Summary
This chapter showed us some of the places where decorators can be used to make
our code simpler and add some fairly complex behavior to very simple functions.
Truthfully, most decorators are more complex than the regular function would
have been by simply adding the functionality directly, but the added advantage of
applying the same pattern to many functions and classes is generally well worth it.
Decorators have so many uses to make your functions and classes smarter and more
convenient to use:
Debugging
Validation
The most important takeaway of this chapter should be to never forget
functools.wraps when wrapping a function. Debugging decorated functions can be
rather difficult because of (unexpected) behavior modification, but losing attributes
as well can make that problem much worse.
The next chapter will show us how and when to use generators and coroutines. This
chapter has already shown us the usage of the with statement slightly, but generators
and coroutines go much further with this. We will still be using decorators often
though, so make sure you have a good understanding of how they work.