Python Scientific
Release 2011
EuroScipy tutorial team. Editors: Valentin Haenel, Emmanuelle Gouillart, Gaël Varoquaux
http://scipy-lectures.github.com

Contents
Part I

3 NumPy: creating and manipulating numerical data
  3.1 Intro
  3.2 1. Basics I
  3.3 2. Basics II
  3.4 3. Moving on
  3.5 4. Under the hood
4 Getting help and finding documentation
5 Matplotlib
  5.1 Introduction
  5.2 IPython
  5.3 pylab
  5.4 Simple Plots
  5.5 Properties
  5.6 Text
  5.7 Ticks
  5.8 Figures, Subplots, and Axes
  5.9 Other Types of Plots
  5.10 The Class Library
6 Scipy: high-level scientific computing
  6.1 Scipy builds upon Numpy
  6.2 File input/output: scipy.io
  6.3 Signal processing: scipy.signal
  6.4 Special functions: scipy.special
  6.5 Statistics and random numbers: scipy.stats
  6.6 Linear algebra operations: scipy.linalg
  6.7 Numerical integration: scipy.integrate
  6.8 Fast Fourier transforms: scipy.fftpack
  6.9 Interpolation: scipy.interpolate
  6.10 Optimization and fit: scipy.optimize
  6.11 Image processing: scipy.ndimage
  6.12 Summary exercises on scientific computing

Part II: Advanced topics

7 Advanced Python Constructs
  7.1 Iterators, generator expressions and generators
  7.2 Decorators
  7.3 Context managers
8 Advanced Numpy
  8.1 Life of ndarray
  8.2 Universal functions
  8.3 Interoperability features
  8.4 Siblings: chararray, maskedarray, matrix
  8.5 Summary
  8.6 Contributing to Numpy/Scipy
9 Debugging code
  9.1 Avoiding bugs
  9.2 Debugging workflow
  9.3 Using the Python debugger
  9.4 Debugging segmentation faults using gdb
10 Optimizing code
  10.1 Optimization workflow
  10.2 Profiling Python code
  10.3 Making code go faster
  10.4 Writing faster numerical code
11 Sparse Matrices in SciPy
  11.1 Introduction
  11.2 Storage Schemes
  11.3 Linear System Solvers
  11.4 Other Interesting Packages
12 Image manipulation and processing using Numpy and Scipy
  12.1 Opening and writing to image files
  12.2 Displaying images
  12.3 Basic manipulations
  12.4 Image filtering
  12.5 Feature extraction
  12.6 Measuring objects properties
13 3D plotting with Mayavi
  13.1 A simple example
  13.2 3D plotting functions
  13.3 Figures and decorations
  13.4 Interaction
14 Sympy: Symbolic Mathematics in Python
  14.1 First Steps with SymPy
  14.2 Algebraic manipulations
  14.3 Calculus
  14.4 Equation solving
  14.5 Linear Algebra
15 scikit-learn: machine learning in Python
  15.1 Loading an example dataset
  15.2 Supervised learning
  15.3 Clustering: grouping observations together
  15.4 Putting it all together: face recognition with Support Vector Machines

Bibliography
Index
Part I
Compiled languages: C, C++, Fortran, etc.

Advantages:
- Very fast. Very optimized compilers. For heavy computations, it's difficult to outperform these languages.
- Some very optimized scientific libraries have been written for these languages. Ex: BLAS (vector/matrix operations)

Drawbacks:
- Painful usage: no interactivity during development, mandatory compilation steps, verbose syntax (&, ::, }}, ;, etc.), manual memory management (tricky in C). These are difficult languages for non computer scientists.

Scripting languages: Matlab

Advantages:
- Very rich collection of libraries with numerous algorithms, for many different domains. Fast execution because these libraries are often written in a compiled language.
- Pleasant development environment: comprehensive and well organized help, integrated editor, etc.
- Commercial support is available.

Drawbacks:
- Base language is quite poor and can become restrictive for advanced users.
- Not free.

Other scripting languages: Scilab, Octave, Igor, R, IDL, etc.

Advantages:
- Open-source, free, or at least cheaper than Matlab.
- Some features can be very advanced (statistics in R, figures in Igor, etc.)

Drawbacks:
- Fewer available algorithms than in Matlab, and the language is not more advanced.
- Some software is dedicated to one domain. Ex: Gnuplot or xmgrace to draw curves. These programs are very powerful, but they are restricted to a single type of usage, such as plotting.

What about Python?

Advantages:
- Very rich scientific computing libraries (a bit less than Matlab, though)
- Well-thought-out language, allowing one to write very readable and well structured code: we "code what we think".
- Many libraries for tasks other than scientific computing (web server management, serial port access, etc.)
- Free and open-source software, widely spread, with a vibrant community.

Drawbacks:
- Less pleasant development environment than, for example, Matlab. (More geek-oriented.)
- Not all the algorithms that can be found in more specialized software or toolboxes.
1.1.2 Specifications
- Rich collection of already existing bricks corresponding to classical numerical methods or basic actions: we don't want to re-program the plotting of a curve, a Fourier transform or a fitting algorithm. Don't reinvent the wheel!
- Easy to learn: computer science is neither our job nor our education. We want to be able to draw a curve, smooth a signal, or do a Fourier transform in a few minutes.
- Easy communication with collaborators, students, customers, to make the code live within a lab or a company: the code should be as readable as a book. Thus, the language should contain as few syntax symbols or unneeded routines as possible that would divert the reader from the mathematical or scientific understanding of the code.
- Efficient code that executes quickly... but needless to say that very fast code becomes useless if we spend too much time writing it. So we need both a quick development time and a quick execution time.
- A single environment/language for everything, if possible, to avoid learning a new software package for each new problem.
Getting help:
In [2]: print ?
Type:           builtin_function_or_method
Base Class:     <type 'builtin_function_or_method'>
String Form:    <built-in function print>
Namespace:      Python builtin
Docstring:
    print(value, ..., sep=' ', end='\n', file=sys.stdout)

    Prints the values to a stream, or to sys.stdout by default.

    Optional keyword arguments:
    file: a file-like object (stream); defaults to the current sys.stdout.
    sep:  string inserted between values, default a space.
    end:  string appended after the last value, default a newline.
CHAPTER 2
Now, you can run it in IPython and explore the resulting variables:
In [3]: %run my_file.py
Hello world

In [4]: s
Out[4]: 'Hello world'

In [5]: %whos
Variable   Type    Data/Info
----------------------------
s          str     Hello world
Authors: Chris Burns, Christophe Combelles, Emmanuelle Gouillart, Gaël Varoquaux

Python for scientific computing
From a script to functions

A script is not reusable; functions are. Thinking in terms of functions helps break the problem into small blocks.
We introduce here the Python language. Only the bare minimum necessary for getting started with Numpy and Scipy is addressed here. To learn more about the language, consider going through the excellent tutorial http://docs.python.org/tutorial. Dedicated books are also available, such as http://diveintopython.org/.
Python is a programming language, as are C, Fortran, BASIC, PHP, etc. Some specific features of Python are as follows:
- an interpreted (as opposed to compiled) language. Contrary to e.g. C or Fortran, one does not compile Python code before executing it. In addition, Python can be used interactively: many Python interpreters are available, from which commands and scripts can be executed.
- a free software released under an open-source license: Python can be used and distributed free of charge, even for building commercial software.
- multi-platform: Python is available for all major operating systems, Windows, Linux/Unix, MacOS X, most likely your mobile phone OS, etc.
- a very readable language with clear, non-verbose syntax
- a language for which a large variety of high-quality packages are available for various applications, from web frameworks to scientific computing.
- a language very easy to interface with other languages, in particular C and C++.

Some other features of the language are illustrated just below. For example, Python is an object-oriented language, with dynamic typing (the same variable can contain objects of different types during the course of a program).

See http://www.python.org/about/ for more information about distinguishing features of Python.
>>> c = 2.1
and booleans:
>>> 3 > 4
False
>>> test = (3 > 4)
>>> test
False
>>> type(test)
<type 'bool'>
The message "Hello, world!" is then displayed. You just executed your first Python instruction, congratulations! To get yourself started, type the following stack of instructions:
>>> a = 3
>>> b = 2*a
>>> type(b)
<type 'int'>
>>> print b
6
>>> a*b
18
>>> b = 'hello'
>>> type(b)
<type 'str'>
>>> b + b
'hellohello'
>>> 2*b
'hellohello'
A Python shell can therefore replace your pocket calculator, with the basic arithmetic operations +, -, *, /, % (modulo) natively implemented:
>>> 7 * 3. 21.0 >>> 2**10 1024 >>> 8 % 3 2
Two variables a and b have been defined above. Note that one does not declare the type of a variable before assigning its value. In C, conversely, one should write:
int a = 3;
In addition, the type of a variable may change, in the sense that at one point in time it can be equal to a value of a certain type, and at a second point in time, it can be equal to a value of a different type. b was first equal to an integer, but it became equal to a string when it was assigned the value 'hello'. Operations on integers (b = 2*a) are coded natively in Python, and so are some operations on strings such as additions and multiplications, which amount respectively to concatenation and repetition.
>>> type(1)
<type 'int'>
>>> type(1.)
<type 'float'>
>>> type(1. + 0j)
<type 'complex'>
>>> a = 3
>>> type(a)
<type 'int'>
Type conversion:

>>> float(1)
1.0
2.2.2 Containers
Python provides many efcient types of containers, in which collections of objects can be stored. Lists A list is an ordered collection of objects, that may have different types. For example
>>> l = [1, 2, 3, 4, 5]
>>> type(l)
<type 'list'>
For collections of numerical data that all have the same type, it is often more efficient to use the array type provided by the numpy module. A NumPy array is a chunk of memory containing fixed-size items. With NumPy arrays, operations on elements can be faster because elements are regularly spaced in memory and more operations are performed through specialized C functions instead of Python loops, as the sketch below illustrates.
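A rough, hedged illustration of that speed difference, using IPython's %timeit magic (no timings are shown since they depend on your machine; the array version is typically orders of magnitude faster):

In [1]: import numpy as np
In [2]: data_list = range(100000)            # a plain Python list
In [3]: data_array = np.arange(100000)       # the same values as a NumPy array
In [4]: %timeit [x + 1 for x in data_list]   # loops in Python, element by element
In [5]: %timeit data_array + 1               # one vectorized operation in C

Python offers a large panel of functions to modify lists, or query them. Here are a few examples; for more details, see http://docs.python.org/tutorial/datastructures.html#more-on-lists

Add and remove elements: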
>>> l = [1, 2, 3, 4, 5]
>>> l.append(6)
>>> l
[1, 2, 3, 4, 5, 6]
>>> l.pop()
6
>>> l
[1, 2, 3, 4, 5]
>>> l.extend([6, 7])   # extend l, in-place
>>> l
[1, 2, 3, 4, 5, 6, 7]
>>> l = l[:-2]
>>> l
[1, 2, 3, 4, 5]
Warning: Indexing starts at 0 (as in C), not at 1 (as in Fortran or Matlab)!

Slicing: obtaining sublists of regularly-spaced elements:
>>> l
[1, 2, 3, 4, 5]
>>> l[2:4]
[3, 4]
Reverse l:
>>> r = l[::-1] >>> r [5, 4, 3, 2, 1]
Warning: Note that l[start:stop] contains the elements with indices i such that start <= i < stop (i ranging from start to stop-1). Therefore, l[start:stop] has (stop - start) elements.

Slicing syntax: l[start:stop:stride]

All slicing parameters are optional:
>>> l[3:]
[4, 5]
>>> l[:3]
[1, 2, 3]
>>> l[::2]
[1, 3, 5]
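The (stop - start) rule from the warning above can be checked directly (a small sketch on the same list):

>>> l[1:4]
[2, 3, 4]
>>> len(l[1:4]) == 4 - 1
True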
Sort r (in-place):
>>> r.sort() >>> r [1, 2, 3, 4, 5]
Note: Methods and Object-Oriented Programming
The notation r.method() (e.g. r.sort(), r.append(3), l.pop()) is our first example of object-oriented programming (OOP). Being a list, the object r owns the method function that is called using the dot notation. No further knowledge of OOP than understanding the dot notation is necessary for going through this tutorial.

Note: Discovering methods: in IPython, use tab-completion (press Tab):
In [28]: r.
r.__add__          r.__delslice__        r.__len__           r.__subclasshook__
r.__class__        r.__doc__             r.__lt__            r.append
r.__contains__     r.__eq__              r.__mul__           r.count
r.__delattr__      r.__format__          r.__ne__            r.extend
r.__delitem__      r.__ge__              r.__new__           r.index
r.__iadd__         r.__getattribute__    r.__reduce__        r.insert
r.__imul__         r.__getitem__         r.__reduce_ex__     r.pop
r.__init__         r.__getslice__        r.__repr__          r.remove
r.__iter__         r.__gt__              r.__reversed__      r.reverse
r.__le__           r.__hash__            r.__rmul__          r.sort
r.__setattr__      r.__setitem__         r.__setslice__
r.__sizeof__       r.__str__
In [53]: a = "hello, world!"
In [54]: a[2] = 'z'
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/home/gouillar/travail/sgr/2009/talks/dakar_python/cours/gael/essai/source/<ipython console> in <module>()
TypeError: 'str' object does not support item assignment

In [55]: a.replace('l', 'z', 1)
Out[55]: 'hezlo, world!'
In [56]: a.replace('l', 'z')
Out[56]: 'hezzo, worzd!'
Strings have many useful methods, such as a.replace as seen above. Remember the a.method() object-oriented notation and use tab completion or help(str) to search for new methods.

Note: Python offers advanced possibilities for manipulating strings, looking for patterns or formatting. Due to lack of time this topic is not addressed here, but the interested reader is referred to http://docs.python.org/library/stdtypes.html#string-methods and http://docs.python.org/library/string.html#new-string-formatting

String substitution:
>>> 'An integer: %i; a float: %f; another string: %s' % (1, 0.1, 'string')
'An integer: 1; a float: 0.100000; another string: string'
>>> i = 102
>>> filename = 'processing_of_dataset_%03d.txt' % i
>>> filename
'processing_of_dataset_102.txt'
>>> s = '''Hi,
... what's up?'''   # tripling the quotes allows the string to span more than one line
In contrast, an unescaped apostrophe inside a single-quoted string is a syntax error:

In [1]: 'Hi, what's up ?'
------------------------------------------------------------
   File "<ipython console>", line 1
     'Hi, what's up ?'
               ^
SyntaxError: invalid syntax
The newline character is \n, and the tab character is \t. Strings are collections like lists. Hence they can be indexed and sliced, using the same syntax and rules. Indexing:
>>> a = "hello"
>>> a[0]
'h'
>>> a[1]
'e'
>>> a[-1]
'o'
Dictionaries

A dictionary is basically an efficient table that maps keys to values. It is an unordered container:
>>> tel = {'emmanuelle': 5752, 'sebastian': 5578}
>>> tel['francis'] = 5915
>>> tel
{'sebastian': 5578, 'francis': 5915, 'emmanuelle': 5752}
>>> tel['sebastian']
5578
>>> tel.keys()
['sebastian', 'francis', 'emmanuelle']
>>> tel.values()
[5578, 5915, 5752]
>>> 'francis' in tel
True
(Remember that negative indices correspond to counting from the right end.) Slicing:
>>> a = "hello, world!" >>> a[3:6] # 3rd to 6th (excluded) elements: elements 3, 4, 5 lo, >>> a[2:10:2] # Syntax: a[start:stop:step] lo o >>> a[::3] # every three characters, from beginning to end hl r!
It can be used to conveniently store and retrieve values associated with a name (a string for a date, a name, etc.). See http://docs.python.org/tutorial/datastructures.html#dictionaries for more information. A dictionary can have keys (resp. values) with different types:
>>> d = {'a': 1, 'b': 2, 3: 'hello'}
>>> d
{'a': 1, 3: 'hello', 'b': 2}
Python also supports Unicode strings (see the Python documentation for details).
A string is an immutable object and it is not possible to modify its contents (as the TypeError above shows). One may however create new strings from the original one.
More container types

Tuples

Tuples are basically immutable lists. The elements of a tuple are written between parentheses, or just separated by commas:
>>> t = 12345, 54321, 'hello!'
>>> t[0]
12345
>>> t
(12345, 54321, 'hello!')
>>> u = (0, 2)
A bag of IPython tricks
- Several Linux shell commands work in IPython, such as ls, pwd, cd, etc.
- To get help about objects, functions, etc., type help object. Just type help() to get started.
- Use tab-completion as much as possible: while typing the beginning of an object's name (variable, function, module), press the Tab key and IPython will complete the expression to match available names. If many names are possible, a list of names is displayed.
- History: press the up (resp. down) arrow to go through all previous (resp. next) instructions starting with the expression on the left of the cursor (put the cursor at the beginning of the line to go through all previous commands).
- You may log your session by using the IPython magic command %logstart. Your instructions will be saved in a file, which you can execute as a script in a different session:
In [1]: %logstart commands.log
Activating auto-logging. Current session state plus future input saved.
Filename       : commands.log
Mode           : backup
Output logging : False
Raw input log  : False
Timestamping   : False
State          : active

The key concept here is mutable vs. immutable:
- mutable objects can be changed in place
- immutable objects cannot be modified once created

A very good and detailed explanation of the above issues can be found in David M. Beazley's article "Types and Objects in Python".
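A minimal sketch of the distinction, using only built-in types:

>>> l = [1, 2, 3]
>>> l[0] = 99                 # lists are mutable: changed in place
>>> l
[99, 2, 3]
>>> s = "abc"
>>> s = s.replace('a', 'z')   # strings are immutable: replace builds a new string
>>> s
'zbc'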
2.4.1 if/elif/else
In [1]: if 2**2 == 4:
   ...:     print 'Obvious!'
   ...:
Obvious!
Blocks are delimited by indentation

Type the following lines in your Python interpreter, and be careful to respect the indentation depth. The IPython shell automatically increases the indentation depth after a colon (:); to decrease the indentation depth, go four spaces to the left with the Backspace key. Press the Enter key twice to leave the logical block.
In [2]: a = 10

In [3]: if a == 1:
   ...:     print(1)
   ...: elif a == 2:
   ...:     print(2)
   ...: else:
   ...:     print('A lot')
   ...:
A lot
Indentation is compulsory in scripts as well. As an exercise, re-type the previous lines with the same indentation in a script condition.py, and execute the script with %run condition.py in IPython.
2.4.2 for/range
Iterating with an index:
In [4]: for i in range(4):
   ...:     print(i)
   ...:
0
1
2
3
Evaluates to False:
- any number equal to zero (0, 0.0, 0+0j)
- an empty container (list, tuple, set, dictionary, ...)
- False, None

Evaluates to True:
- everything else

a == b tests equality, with logics:
In [19]: 1 == 1. Out[19]: True
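To see those rules in action, a quick check with bool() on some of the values listed above:

>>> bool(0), bool(0.0), bool(0 + 0j)
(False, False, False)
>>> bool([]), bool(()), bool({})
(False, False, False)
>>> bool(None), bool(False)
(False, False)
>>> bool(1), bool("hello"), bool([0])
(True, True, True)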
2.4.3 while/break/continue
Typical C-style while loop (Mandelbrot problem):
In [6]: z = 1 + 1j

In [7]: while abs(z) < 100:
   ...:     z = z**2 + 1
   ...:

In [8]: z
Out[8]: (-134+352j)
User-dened classes can customize those rules by overriding the special __nonzero__ method.
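For instance, a minimal sketch (the class name Empty is made up for illustration; in Python 3 the method is named __bool__):

In [20]: class Empty(object):
   ....:     def __nonzero__(self):
   ....:         return False
   ....:
In [21]: if not Empty():
   ....:     print 'evaluates to False'
   ....:
evaluates to False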
Most of the time, though, it is more readable to iterate over the values themselves:

In [5]: for word in ('cool', 'powerful', 'readable'):
   ...:     print word
   ...:
cool
powerful
readable
Exercise
Compute the decimals of Pi using the Wallis formula:

\pi = 2 \prod_{i=1}^{\infty} \frac{4i^2}{4i^2 - 1}
Few languages (in particular, languages for scientific computing) allow looping over anything but integers/indices. With Python it is possible to loop exactly over the objects of interest without bothering with indices you often don't care about.

Warning: It is not safe to modify the sequence you are iterating over.
Keeping track of enumeration number

A common task is to iterate over a sequence while keeping track of the item number. One could use a while loop with a counter as above, or a for loop:
In [13]: for i in range(0, len(words)):
   ....:     print i, words[i]
   ....:
0 cool
1 powerful
2 readable
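Python provides enumerate for exactly this, avoiding the manual indexing (a small sketch reusing the same words):

In [14]: words = ('cool', 'powerful', 'readable')
In [15]: for index, item in enumerate(words):
   ....:     print index, item
   ....:
0 cool
1 powerful
2 readable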
Note: By default, functions return None.

Note: Note the syntax to define a function:
- the def keyword,
- followed by the function's name, then
- the arguments of the function, given between parentheses followed by a colon,
- the function body,
- and return object for optionally returning values.
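A minimal example of that syntax (the function name test is arbitrary):

In [56]: def test():
   ....:     print('in test function')
   ....:
In [57]: test()
in test function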
2.5.3 Parameters
Mandatory parameters (positional arguments)
In [81]: def double_it(x):
   ....:     return x * 2
   ....:

In [82]: double_it(3)
Out[82]: 6

In [83]: double_it()
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/Users/cburns/src/scipy2009/scipy_2009_tutorial/source/<ipython console> in <module>()
TypeError: double_it() takes exactly 1 argument (0 given)
Keyword arguments can be passed in any order, but it is good practice to use the same ordering as the function's definition. Keyword arguments are a very convenient feature for defining functions with a variable number of arguments, especially when default values are to be used in most calls to the function.
Keyword arguments allow you to specify default values.

Warning: Default values are evaluated when the function is defined, not when it is called.
In [124]: bigx = 10

In [125]: def double_it(x=bigx):
   .....:     return x * 2
   .....:

In [126]: bigx = 1e9   # Now really big

In [128]: double_it()
Out[128]: 20
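A related pitfall worth a sketch (the function name add_item is invented for illustration): a mutable default value is created once, at definition time, and shared between calls:

In [1]: def add_item(item, bag=[]):   # the [] is evaluated only once
   ...:     bag.append(item)
   ...:     return bag
   ...:
In [2]: add_item(1)
Out[2]: [1]
In [3]: add_item(2)   # the same list again!
Out[3]: [1, 2]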
Functions have a local variable table, called a local namespace. The variable x below only exists within the function foo.
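A sketch of that statement (foo is a placeholder name):

In [1]: def foo():
   ...:     x = 2   # local to foo
   ...:     print 'x inside foo:', x
   ...:
In [2]: foo()
x inside foo: 2
In [3]: x
---------------------------------------------------------------------------
NameError: name 'x' is not defined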
Variables declared outside the function can be referenced within the function. But these global variables cannot be modified within the function, unless declared global in the function. This doesn't work:
In [116]: x = 5

In [117]: def setx(y):
   .....:     x = y
   .....:     print('x is %d' % x)
   .....:

In [118]: setx(10)
x is 10

In [120]: x
Out[120]: 5
Note: Docstring guidelines
For the sake of standardization, the Docstring Conventions webpage documents the semantics and conventions associated with Python docstrings. Also, the Numpy and Scipy modules have defined a precise standard for documenting scientific functions, that you may want to follow for your own functions, with a Parameters section, an Examples section, etc. See http://projects.scipy.org/numpy/wiki/CodingStyleGuidelines#docstring-standard and http://projects.scipy.org/numpy/browser/trunk/doc/example.py#L37
This works:
In [121]: def setx(y):
   .....:     global x
   .....:     x = y
   .....:     print('x is %d' % x)
   .....:

In [122]: setx(10)
x is 10

In [123]: x
Out[123]: 10
In [35]: def variable_args(*args, **kwargs):
   ....:     print 'args is', args
   ....:     print 'kwargs is', kwargs
   ....:

In [38]: va = variable_args

In [39]: va('three', x=1, y=2)
args is ('three',)
kwargs is {'y': 2, 'x': 1}
2.5.9 Methods
Methods are functions attached to objects. You've seen these in our examples on lists, dictionaries, strings, etc...
2.5.10 Exercises
Exercise: Quicksort
2.5.7 Docstrings
Documentation about what the function does and its parameters. General convention:
In [67]: def funcname(params):
   ....:     """Concise one-line sentence describing the function.
   ....:
   ....:     Extended summary which can contain multiple paragraphs.
   ....:     """
   ....:     # function body
   ....:     pass
   ....:

In [68]: funcname ?
Type:           function
Base Class:     <type 'function'>
String Form:    <function funcname at 0xeaa0f0>
Namespace:      Interactive
File:           /Users/cburns/src/scipy2009/.../<ipython console>
Definition:     funcname(params)
Docstring:
    Concise one-line sentence describing the function.

    Extended summary which can contain multiple paragraphs.
Exercise: Fibonacci sequence
Write a function that displays the n first terms of the Fibonacci sequence, defined by:

u_0 = 1;\quad u_1 = 1;\quad u_{n+2} = u_{n+1} + u_n
import sys
print sys.argv

$ python file.py test arguments
['file.py', 'test', 'arguments']
Note: Don't implement option parsing yourself. Use modules such as optparse.
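A hedged sketch of what that looks like with optparse (the option names here are invented for illustration):

from optparse import OptionParser

parser = OptionParser()
parser.add_option('-o', '--output', dest='output',
                  help='write result to FILE', metavar='FILE')
parser.add_option('-v', '--verbose', action='store_true',
                  dest='verbose', default=False)
(options, args) = parser.parse_args()   # parses sys.argv[1:] by default
print options.output, options.verbose, args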
2.6.1 Scripts
Let us first write a script, that is, a file with a sequence of instructions that are executed each time the script is called. Instructions may be e.g. copied-and-pasted from the interpreter (but take care to respect indentation rules!). The extension for Python files is .py. Write or copy-and-paste the following lines in a file called test.py:
message = "Hello how are you?" for word in message.split(): print word
Let us now execute the script interactively, that is, inside the IPython interpreter. This is maybe the most common use of scripts in scientific computing. In IPython, the syntax to execute a script is %run script.py. For example:
In [1]: %run test.py
Hello
how
are
you?

In [2]: message
Out[2]: 'Hello how are you?'
And also:
In [4]: from os import listdir
Importing shorthands:
In [5]: import numpy as np
Warning:
from os import *
The script has been executed. Moreover, the variables defined in the script (such as message) are now available inside the interpreter's namespace. Other interpreters also offer the possibility to execute scripts (e.g., execfile in the plain Python interpreter, etc.). It is also possible to execute this script as a standalone program, by executing the script inside a shell terminal (Linux/Mac console or cmd Windows console). For example, if we are in the same directory as the test.py file, we can execute this in a console:
epsilon:~/sandbox$ python test.py Hello how are you?
Do not do it:
- Makes the code harder to read and understand: where do symbols come from?
- Makes it impossible to guess the functionality by the context and the name (hint: os.name is the name of the OS), and to profit usefully from tab completion.
- Restricts the variable names you can use: os.name might override name, or vice-versa.
- Creates possible name clashes between modules.
- Makes the code impossible to statically check for undefined symbols.

Modules are thus a good way to organize code in a hierarchical way. Actually, all the scientific computing tools we are going to use are modules:
>>> import numpy as np   # data arrays
>>> np.linspace(0, 10, 6)
array([  0.,   2.,   4.,   6.,   8.,  10.])
>>> import scipy         # scientific computing
Let us create a module demo contained in the file demo.py:

"A demo module."

def print_b():
    "Prints b."
    print 'b'

def print_a():
    "Prints a."
    print 'a'

c = 2
d = 2
In this file, we defined two functions print_a and print_b. Suppose we want to call the print_a function from the interpreter. We could execute the file as a script, but since we just want to have access to the function print_a, we are rather going to import it as a module. The syntax is as follows.
In [1]: import demo
Importing the module gives access to its objects, using the module.object syntax. Don't forget to put the module's name before the object's name, otherwise Python won't recognize the instruction.

Introspection:
In [4]: demo ?
Type:           module
Base Class:     <type 'module'>
String Form:    <module 'demo' from 'demo.py'>
Namespace:      Interactive
File:           /home/varoquau/Projects/Python_talks/scipy_2009_tutorial/source/demo.py
Docstring:
    A demo module.
Warning: Module caching Modules are cached: if you modify demo.py and re-import it in the old session, you will get the old one. Solution:
In [10]: reload(demo)
In [5]: who
demo

In [6]: whos
Variable   Type      Data/Info
------------------------------
demo       module    <module 'demo' from 'demo.py'>

In [7]: dir(demo)

In [8]: demo.
demo.__builtins__     demo.__dict__    demo.__format__           demo.__hash__
demo.__class__        demo.__doc__     demo.__getattribute__     ...
demo.__delattr__      demo.__file__
Running it:
In [13]: %run demo2
b
a
This method is not very robust, however, because it makes the code less portable (user-dependent path) and because you have to add the directory to your sys.path each time you want to import from a module in this directory. See http://docs.python.org/tutorial/modules.html for more information about modules.
2.6.6 Packages
A directory that contains many modules is called a package. A package is a module with submodules (which can have submodules themselves, etc.). A special file called __init__.py (which may be empty) tells Python that the directory is a Python package, from which modules can be imported.
sd-2116 /usr/lib/python2.6/dist-packages/scipy $ ls
cluster/         __init__.py@    maxentropy/    setup.py@        stsci/
__config__.py@   __init__.pyc    misc/          setup.pyc        __svn_version__.py@
__config__.pyc   INSTALL.txt@    ndimage/       setupscons.py@   __svn_version__.pyc
constants/       integrate/      odr/           setupscons.pyc   THANKS.txt@
fftpack/         interpolate/    optimize/      signal/          TOCHANGE.txt@
io/              LATEST.txt@     README.txt@    sparse/          version.py@
lib/             linalg/         linsolve/      spatial/         version.pyc
special/         stats/          weave/

sd-2116 /usr/lib/python2.6/dist-packages/scipy $ cd ndimage
sd-2116 /usr/lib/python2.6/dist-packages/scipy/ndimage $ ls
doccer.py@    fourier.py@    __init__.py@        measurements.py@    _ni_support.py@    setup.pyc
doccer.pyc    fourier.pyc    __init__.pyc        measurements.pyc    _ni_support.pyc    setupscons.py@
filters.py@   info.py@       interpolation.py@   morphology.py@      _nd_image.so       setupscons.pyc
filters.pyc   info.pyc       interpolation.pyc   morphology.pyc      setup.py@          tests/
From IPython:
In [1]: import scipy

In [2]: scipy.__file__
Out[2]: '/usr/lib/python2.6/dist-packages/scipy/__init__.pyc'

In [3]: import scipy.version

In [4]: scipy.version.version
Out[4]: '0.7.0'

In [5]: import scipy.ndimage.morphology

In [6]: from scipy.ndimage import morphology

In [17]: morphology.binary_dilation ?
Type:           function
Base Class:     <type 'function'>
String Form:    <function binary_dilation at 0x9bedd84>
Namespace:      Interactive
File:           /usr/lib/python2.6/dist-packages/scipy/ndimage/morphology.py
Definition:     morphology.binary_dilation(input, structure=None, iterations=1, mask=None, output=None, border_value=0, origin=0, brute_force=False)
Docstring:
    Multi-dimensional binary dilation with the given structure.

    An output array can optionally be provided. The origin parameter
    controls the placement of the filter. If no structuring element is
    provided an element is generated with a squared connectivity equal
    to one. The dilation operation is repeated iterations times. If
    iterations is less than 1, the dilation is repeated until the
    result does not change anymore. If a mask is given, only those
    elements with a true value at the corresponding mask element are
    modified at each iteration.
Modules must be located in the search path, therefore you can:
- write your own modules within directories already defined in the search path (e.g. /usr/local/lib/python2.6/dist-packages). You may use symbolic links (on Linux) to keep the code somewhere else.
- modify the environment variable PYTHONPATH to include the directories containing the user-defined modules. On Linux/Unix, add the following line to a file read by the shell at startup (e.g. /etc/profile, .profile):
export PYTHONPATH=$PYTHONPATH:/home/emma/user_defined_modules
On Windows, http://support.microsoft.com/kb/310519 explains how to handle environment variables. 2.6. Reusing code: scripts and modules 29
To read from a file:
In [1]: f = open('workfile', 'r')

In [2]: s = f.read()
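Writing is symmetric (a small sketch; 'workfile' is just an example name, and opening in 'w' mode overwrites the file):

In [3]: f.close()                       # done reading
In [4]: f = open('workfile', 'w')       # open (and truncate) the file for writing
In [5]: f.write('This is a test\nand another test')
In [6]: f.close()                       # flush and close; do not forget this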
All this indentation business can be a bit confusing in the beginning. However, with the clear indentation, and in the absence of extra characters, the resulting code is very nice to read compared to other languages.

Indentation depth: inside your text editor, you may choose to indent with any positive number of spaces (1, 2, 3, 4, ...). However, it is considered good practice to indent with 4 spaces. You may configure your editor to map the Tab key to a 4-space indentation. In Python(x,y), the editor Scite is already configured this way.

Style guidelines

Long lines: you should not write very long lines that span over more than (e.g.) 80 characters. Long lines can be broken with the \ character:
>>> long_line = "Here is a very very long line \ ... that we break in two parts."
File modes
- Read-only: r
- Write-only: w (Note: creates a new file or overwrites an existing file.)
- Append a file: a
- Read and Write: r+
- Binary mode: b (Note: use for binary files, especially on Windows.)
Spaces Write well-spaced code: put whitespaces after commas, around arithmetic operators, etc.:
>>> a = 1   # yes
>>> a=1     # too cramped
A certain number of rules for writing beautiful code (and more importantly using the same conventions as anybody else!) are given in the Style Guide for Python Code. Use meaningful object names
The Python Standard Library documentation: http://docs.python.org/library/index.html Python Essential Reference, David Beazley, Addison-Wesley Professional
List a directory:
In [31]: os.listdir(os.curdir)
Out[31]:
['.index.rst.swo',
 '.python_language.rst.swp',
 '.view_array.py.swp',
 '_static',
 '_templates',
 'basic_types.rst',
 'conf.py',
 'control_flow.rst',
 'debugging.rst',
 ...
Make a directory:
In [32]: os.mkdir('junkdir')

In [33]: 'junkdir' in os.listdir(os.curdir)
Out[33]: True
Delete a file:
In [44]: fp = open('junk.txt', 'w')

In [45]: fp.close()

In [46]: 'junk.txt' in os.listdir(os.curdir)
Out[46]: True

In [47]: os.remove('junk.txt')
Environment variables:
In [9]: import os

In [11]: os.environ.keys()
Out[11]:
['_',
 'FSLDIR',
 'TERM_PROGRAM_VERSION',
 'FSLREMOTECALL',
 'USER',
 'HOME',
 'PATH',
 'PS1',
 'SHELL',
 'EDITOR',
 'WORKON_HOME',
 'PYTHONPATH',
 ...

In [12]: os.environ['PYTHONPATH']
Out[12]: '.:/Users/cburns/src/utils:/Users/cburns/src/nitools:
/Users/cburns/local/lib/python2.5/site-packages/:
/usr/local/lib/python2.5/site-packages/:
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5'

In [16]: os.getenv('PYTHONPATH')
Out[16]: '.:/Users/cburns/src/utils:/Users/cburns/src/nitools:
/Users/cburns/local/lib/python2.5/site-packages/:
/usr/local/lib/python2.5/site-packages/:
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5'
sys.path is a list of strings that specifies the search path for modules. It is initialized from PYTHONPATH:
In [121]: sys.path
Out[121]:
['',
 '/Users/cburns/local/bin',
 '/Users/cburns/local/lib/python2.5/site-packages/grin-1.1-py2.5.egg',
 '/Users/cburns/local/lib/python2.5/site-packages/argparse-0.8.0-py2.5.egg',
 '/Users/cburns/local/lib/python2.5/site-packages/urwid-0.9.7.1-py2.5.egg',
 '/Users/cburns/local/lib/python2.5/site-packages/yolk-0.4.1-py2.5.egg',
 '/Users/cburns/local/lib/python2.5/site-packages/virtualenv-1.2-py2.5.egg',
 ...
In [1]: import pickle

In [2]: l = [1, None, 'Stan']

In [3]: pickle.dump(l, file('test.pkl', 'w'))

In [4]: pickle.load(file('test.pkl'))
Out[4]: [1, None, 'Stan']
Exercise
Write a program to search your PYTHONPATH for the module site.py.
2.9.1 Exceptions
Exceptions are raised by errors in Python:
In [1]: 1/0
---------------------------------------------------------------------------
ZeroDivisionError: integer division or modulo by zero

In [2]: 1 + 'e'
---------------------------------------------------------------------------
TypeError: unsupported operand type(s) for +: 'int' and 'str'

In [3]: d = {1: 1, 2: 2}

In [4]: d[3]
---------------------------------------------------------------------------
KeyError: 3

In [5]: l = [1, 2, 3]

In [6]: l[4]
---------------------------------------------------------------------------
IndexError: list index out of range

In [7]: l.foobar
---------------------------------------------------------------------------
AttributeError: 'list' object has no attribute 'foobar'
try/finally blocks are important for resource management (e.g. closing a file). A related Python idiom: it is "easier to ask for forgiveness than for permission":
In [11]: def print_sorted(collection):
   ....:     try:
   ....:         collection.sort()
   ....:     except AttributeError:
   ....:         pass
   ....:     print(collection)
   ....:

In [12]: print_sorted([1, 3, 2])
[1, 2, 3]

In [13]: print_sorted(set((1, 3, 2)))
set([1, 2, 3])

In [14]: print_sorted('132')
132
In [17]: filter_name('Stéfan')
---------------------------------------------------------------------------
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 2: ordinal not in range(128)
Use exceptions to notify that certain conditions are met (e.g. StopIteration) or not (e.g. custom error raising).
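A small sketch of the second case (the exception name TooSmallError is invented for illustration):

In [1]: class TooSmallError(Exception):
   ...:     pass
   ...:

In [2]: def check(x):
   ...:     if x < 10:
   ...:         raise TooSmallError('%d is below the threshold' % x)
   ...:     return x
   ...:

In [3]: check(5)
---------------------------------------------------------------------------
TooSmallError: 5 is below the threshold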
Classes are the central concept of object-oriented programming. As an example, here is a Student class with the methods and attributes described below:

>>> class Student(object):
...     def __init__(self, name):
...         self.name = name
...     def set_age(self, age):
...         self.age = age
...     def set_major(self, major):
...         self.major = major
...

The Student class has __init__, set_age and set_major methods. Its attributes are name, age and major. We can call these methods and attributes with the following notation: classinstance.method() or classinstance.attribute. The __init__ constructor is a special method we call with: MyClass(init parameters if any).

Now, suppose we want to create a new class MasterStudent with the same methods and attributes as the previous one, but with an additional internship attribute. We won't copy the previous class, but inherit from it:

>>> class MasterStudent(Student):
...     internship = 'mandatory, from March to June'
...
>>> james = MasterStudent('james')
>>> james.internship
'mandatory, from March to June'
>>> james.set_age(23)
>>> james.age
23

The MasterStudent class inherited the Student attributes and methods. Thanks to classes and object-oriented programming, we can organize code with different classes corresponding to different objects we encounter (an Experiment class, an Image class, a Flow class, etc.), with their own methods and attributes. Then we can use inheritance to consider variations around a base class and re-use code. Ex: from a Flow base class, we can create derived StokesFlow, TurbulentFlow, PotentialFlow, etc.

CHAPTER 3

NumPy: creating and manipulating numerical data

Authors: Emmanuelle Gouillart, Didrik Pinte, Gaël Varoquaux, and Pauli Virtanen

3.1 Intro

3.1.1 What is Numpy

Python has:
- built-in: lists, integers, floating point
- for numerics, more is needed (efficiency, convenience)

Numpy is:
- an extension package to Python for multidimensional arrays
- closer to hardware (efficiency)
- designed for scientific computation (convenience)

For example, an array can contain:
- discretized time of an experiment/simulation
- signal recorded by a measurement device
- pixels of an image
- ...
3.2 1. Basics I
3.2.1 Getting started
>>> import numpy as np
>>> a = np.array([0, 1, 2, 3])
>>> a
array([0, 1, 2, 3])
>>> c = np.array([[[1], [2]], [[3], [4]]])
>>> c
array([[[1],
        [2]],

       [[3],
        [4]]])
>>> c.shape
(2, 2, 1)
or by number of points:
>>> c = np.linspace(0, 1, 6)   # start, end, num-points
>>> c
array([ 0. ,  0.2,  0.4,  0.6,  0.8,  1. ])
>>> d = np.linspace(0, 1, 5, endpoint=False)
>>> d
array([ 0. ,  0.2,  0.4,  0.6,  0.8])
Common arrays:
>>> a = np.ones((3, 3))   # reminder: (3, 3) is a tuple
>>> a
array([[ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [ 1.,  1.,  1.]])
>>> b = np.zeros((2, 2))
>>> b
array([[ 0.,  0.],
       [ 0.,  0.]])
>>> c = np.eye(3)
>>> c
array([[ 1.,  0.,  0.],
       [ 0.,  1.,  0.],
       [ 0.,  0.,  1.]])
>>> d = np.diag(np.array([1, 2, 3, 4, 5]))
>>> d
array([[1, 0, 0, 0, 0],
       [0, 2, 0, 0, 0],
       [0, 0, 3, 0, 0],
       [0, 0, 0, 4, 0],
       [0, 0, 0, 0, 5]])
>>> c = np.random.rand(3, 3)
>>> c
array([[ 0.31976645,  0.64807526,  ...],
       [ 0.8280203 ,  0.8669403 ,  ...],
       [ 0.11527489,  0.11494884,  ...]])
>>> d = np.random.zipf(1.5, size=(2, 8))   # Zipf distribution (s=1.5)
>>> d
array([[5290,    1,    6,    9,    1,    1,    1,    2],
       [   1,    5,    1,   13,    1,    1,    2,    1]])
>>> np.random.seed(1234)                   # Setting the random seed
>>> np.random.rand(3)
array([ 0.19151945,  0.62210877,  0.43772774])
>>> np.random.seed(1234)
>>> np.random.rand(5)
array([ 0.19151945,  0.62210877,  0.43772774,  ...])

Now that we have our first data arrays, we are going to visualize them. Matplotlib is a 2D plotting package. We can import its functions as below:

>>> import matplotlib.pyplot as plt       # the tidy way
>>> # ... or ...
>>> from matplotlib.pyplot import *       # imports everything in the namespace

If you launched IPython with python(x,y), or with ipython -pylab (under Linux), both of the above commands have been run. In the remainder of this tutorial, we assume you have run:

>>> import matplotlib.pyplot as plt
1D plotting
>>> x = np.linspace(0, 3, 20)
>>> y = np.linspace(0, 9, 20)
>>> plt.plot(x, y)        # line plot
>>> plt.plot(x, y, 'o')   # dot plot
>>> plt.show()            # <-- shows the plot (not needed with IPython)
Much of the time you don't necessarily need to care, but remember they are there.
>>> plt.pcolor(image)
>>> plt.hot()
>>> plt.colorbar()
>>> plt.show()
See Also: More on matplotlib in the tutorial by Mike Müller tomorrow!

3D plotting

For 3D visualization, we can use another package: Mayavi. A quick example: start by relaunching IPython with these options: ipython -pylab -wthread (or ipython --pylab=wx in IPython >= 0.10).
In [59]: from enthought.mayavi import mlab

In [60]: mlab.figure()
get fences failed: -1
param: 6, val: 0
Out[60]: <enthought.mayavi.core.scene.Scene object at 0xcb2677c>

In [61]: mlab.surf(image)
Out[61]: <enthought.mayavi.modules.surface.Surface object at 0xd0862fc>

In [62]: mlab.axes()
Out[62]: <enthought.mayavi.modules.axes.Axes object at 0xd07892c>

The mayavi/mlab window that opens is interactive: by clicking on the left mouse button you can rotate the image, zoom with the mouse wheel, etc. For more information on Mayavi: http://code.enthought.com/projects/mayavi/docs/development/html/mayavi/index.html
Slicing
Arrays, like other Python sequences, can also be sliced:
>>> a = np.arange(10)
>>> a
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> a[2:9:3]   # [start:end:step]
array([2, 5, 8])
start:end:step is a slice object which represents the set of indexes range(start, end, step). A slice can be explicitly created:
>>> sl = slice(1, 9, 2)
>>> a = np.arange(10)
>>> b = np.arange(1, 20, 2)
>>> a, b
(array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), array([ 1,  3,  5,  7,  9, 11, 13, 15, 17, 19]))
>>> a[sl], b[sl]
(array([1, 3, 5, 7]), array([ 3,  7, 11, 15]))
All three slice components are not required: by default, start is 0, end is the last and step is 1:
>>> a[1:3]
array([1, 2])
>>> a[::2]
array([0, 2, 4, 6, 8])
>>> a[3:]
array([3, 4, 5, 6, 7, 8, 9])
Warning: Indices begin at 0, like other Python sequences (and C/C++). In contrast, in Fortran or Matlab, indices begin at 1. For multidimensional arrays, indexes are tuples of integers:
>>> a = np.diag(np.arange(5))
>>> a
array([[0, 0, 0, 0, 0],
       [0, 1, 0, 0, 0],
       [0, 0, 2, 0, 0],
       [0, 0, 0, 3, 0],
       [0, 0, 0, 0, 4]])
>>> a[1, 1]
1
>>> a[2, 1] = 10   # third line, second column
>>> a
array([[ 0,  0,  0,  0,  0],
       [ 0,  1,  0,  0,  0],
       [ 0, 10,  2,  0,  0],
       [ 0,  0,  0,  3,  0],
       [ 0,  0,  0,  0,  4]])
>>> a[1]
array([0, 1, 0, 0, 0])
Note that:
- In 2D, the first dimension corresponds to rows, the second to columns.
- For a multidimensional array a, a[0] is interpreted by taking all elements in the unspecified dimensions.
>>> data = np.loadtxt('populations.txt')   # if in current directory
>>> data
array([[  1900.,  30000.,   4000.,  51300.],
       [  1901.,  47200.,   6100.,  48200.],
       [  1902.,  70200.,   9800.,  41500.],
       ...
>>> np.savetxt('pop2.txt', data)
>>> data2 = np.loadtxt('pop2.txt')
Note: If you have a complicated text file, what you can try are:
- np.genfromtxt
- Using Python's I/O functions and e.g. regexps for parsing (Python is quite well suited for this)

Navigating the filesystem in Python shells

IPython:
In [1]: pwd   # show current directory
'/home/user/stuff/2011-numpy-tutorial'
In [2]: cd ex
'/home/user/stuff/2011-numpy-tutorial/ex'
In [3]: ls
populations.txt   species.txt
Plain Python (here is yet one more reason to use IPython for interactive use :)
>>> import os
>>> os.getcwd()
'/home/user/stuff/2011-numpy-tutorial'
>>> os.chdir('ex')
>>> os.getcwd()
'/home/user/stuff/2011-numpy-tutorial/ex'
>>> os.listdir('.')
['populations.txt', 'species.txt', ...
Images
>>> img = plt.imread('../../data/elephant.png')
>>> img.shape, img.dtype
((200, 300, 3), dtype('float32'))
>>> plt.imshow(img)
>>> plt.savefig('plot.png')
>>> plt.show()
A slicing operation creates a view on the original array, which is just a way of accessing array data; the original array is not copied in memory. This behavior can be surprising at first sight... but it allows us to save both memory and time.
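A small sketch of what this means in practice: modifying the view modifies the original array.

>>> a = np.arange(10)
>>> b = a[::2]   # b is a view on every other element of a
>>> b[0] = 12
>>> a            # a was modified through b
array([12,  1,  2,  3,  4,  5,  6,  7,  8,  9])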
Other libraries:
>>> from scipy.misc import imsave
>>> imsave('tiny_elephant.png', img[::6, ::6])
>>> plt.imshow(plt.imread('tiny_elephant.png'), interpolation='nearest')
>>> plt.show()
Compute prime numbers in 0-99, with a sieve.

Construct a shape (100,) boolean array is_prime, filled with True in the beginning:
>>> is_prime = np.ones((100,), dtype=bool)
For each integer j starting from 2, cross out its higher multiples
>>> N_max = int(np.sqrt(len(is_prime)))
>>> for j in range(2, N_max):
...     is_prime[2*j::j] = False
Skim through help(np.nonzero), and print the prime numbers.

Follow-up:
- Move the above code into a script file named prime_sieve.py
- Run it to check it works
- Convert the simple sieve to the sieve of Eratosthenes:
  1. Skip j which are already known not to be primes
  2. The first number to cross out is j**2
3.3 2. Basics II
3.3.1 Elementwise operations
With scalars:
>>> a = np.array([1, 2, 3, 4])
>>> a + 1
array([2, 3, 4, 5])
>>> 2**a
array([ 2,  4,  8, 16])
Comparisons:
>>> a = np.array([1, 2, 3, 4])
>>> b = np.array([4, 2, 2, 4])
>>> a == b
array([False,  True, False,  True], dtype=bool)
>>> a > b
array([False, False,  True, False], dtype=bool)
Logical operations:
>>> a = np.array([1, 1, 0, 0], dtype=bool)
>>> b = np.array([1, 0, 1, 0], dtype=bool)
>>> a | b
array([ True,  True,  True, False], dtype=bool)
>>> a & b
array([ True, False, False, False], dtype=bool)
Note: For arrays: & and | for logical operations, not and and or. Shape mismatches:
>>> a
array([1, 2, 3, 4])
>>> a + np.array([1, 2])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: shape mismatch: objects cannot be broadcast to a single shape
Transpose:
>>> a = np.triu(np.ones((3, 3)), 1)   # see help(np.triu)
>>> a
array([[ 0.,  1.,  1.],
       [ 0.,  0.,  1.],
       [ 0.,  0.,  0.]])
>>> a.T
array([[ 0.,  0.,  0.],
       [ 1.,  0.,  0.],
       [ 1.,  1.,  0.]])
>>> x = np.array([[1, 1], [2, 2]])
>>> x
array([[1, 1],
       [2, 2]])
>>> x.sum(axis=0)   # columns (first dimension)
array([3, 3])
>>> x[:, 0].sum(), x[:, 1].sum()
(3, 3)
>>> x.sum(axis=1)   # rows (second dimension)
array([2, 4])
>>> x[0, :].sum(), x[1, :].sum()
(2, 4)
Other reductions work the same way (and take axis=).

Statistics:
>>> x = np.array([1, 2, 3, 1])
>>> y = np.array([[1, 2, 3], [5, 6, 1]])
>>> x.mean()
1.75
>>> np.median(x)
1.5
>>> np.median(y, axis=-1)   # last axis
array([ 2.,  5.])
Eigenvalues:
>>> np.linalg.eigvals(A)
array([ 1.,  2.,  3.])
Example: data statistics
Data in populations.txt describes the populations of hares and lynxes (and carrots) in northern Canada during 20 years. We can first plot the data:
>>> data = np.loadtxt('../../data/populations.txt')
>>> year, hares, lynxes, carrots = data.T   # trick: columns to variables

>>> plt.axes([0.2, 0.1, 0.5, 0.8])
>>> plt.plot(year, hares, year, lynxes, year, carrots)
>>> plt.legend(('Hare', 'Lynx', 'Carrot'), loc=(1.05, 0.5))
>>> plt.show()
Extrema:
>>> x = np.array([1, 3, 2])
>>> x.min()
1
>>> x.max()
3
>>> x.argmin()   # index of minimum
0
>>> x.argmax()   # index of maximum
1
Logical operations:
>>> np.all([True, True, False])
False
>>> np.any([True, True, False])
True
3.3.4 Broadcasting
Random walk exercise: what is the typical distance from the origin of a random walker after t left or right jumps? We set up an ensemble of walkers as follows:

>>> n_stories = 1000   # number of walkers
>>> t_max = 200        # time during which we follow the walker

Basic operations on numpy arrays (addition, etc.) are elementwise. They work on arrays of the same size. Nevertheless, it's also possible to do operations on arrays of different sizes if NumPy can transform these arrays so that they all have the same size: this conversion is called broadcasting. The image below gives an example of broadcasting:
Let's verify:
>>> a = np.tile(np.arange(0, 40, 10), (3, 1)).T
>>> a
array([[ 0,  0,  0],
       [10, 10, 10],
       [20, 20, 20],
       [30, 30, 30]])
>>> b = np.array([0, 1, 2])
>>> a + b
array([[ 0,  1,  2],
       [10, 11, 12],
       [20, 21, 22],
       [30, 31, 32]])
A useful trick:
>>> a = np.arange(0, 40, 10)
>>> a.shape
(4,)
>>> a = a[:, np.newaxis]   # adds a new axis -> 2D array
>>> a.shape
(4, 1)
>>> a
array([[ 0],
       [10],
       [20],
       [30]])
>>> a + b
array([[ 0,  1,  2],
       [10, 11, 12],
       [20, 21, 22],
       [30, 31, 32]])
Good practices
- Explicit variable names (no need of a comment to explain what is in the variable)
- Style: spaces after commas, around =, etc.
- A certain number of rules for writing beautiful code (and, more importantly, using the same conventions as everybody else!) are given in the Style Guide for Python Code and the Docstring Conventions page (to manage help strings).
- Except in some rare cases, write variable names and comments in English.

A lot of grid-based or network-based problems can also use broadcasting. For instance, if we want to compute the distance from the origin of points on a 5x5 grid, we can do:
>>> x, y = np.arange(5), np.arange(5)
>>> distance = np.sqrt(x**2 + y[:, np.newaxis]**2)
>>> distance
array([[ 0.        ,  1.        ,  2.        ,  3.        ,  4.        ],
       [ 1.        ,  1.41421356,  2.23606798,  3.16227766,  4.12310563],
       [ 2.        ,  2.23606798,  2.82842712,  3.60555128,  4.47213595],
       [ 3.        ,  3.16227766,  3.60555128,  4.24264069,  5.        ],
       [ 4.        ,  4.12310563,  4.47213595,  5.        ,  5.65685425]])
Or in color:
>>> plt.pcolor(distance)
>>> plt.colorbar()
>>> plt.axis('equal')
>>> plt.show()
Broadcasting seems a bit magical, but it is actually quite natural to use it when we want to solve a problem whose output data is an array with more dimensions than its input data.

Example
Let's construct an array of distances (in miles) between the cities of Route 66: Chicago, Springfield, Saint-Louis, Tulsa, Oklahoma City, Amarillo, Santa Fe, Albuquerque, Flagstaff and Los Angeles.
>>> mileposts = np.array([0, 198, 303, 736, 871, 1175, 1475, 1544,
...                       1913, 2448])
>>> distance_array = np.abs(mileposts - mileposts[:, np.newaxis])
>>> distance_array
array([[   0,  198,  303,  736,  871, 1175, 1475, 1544, 1913, 2448],
       [ 198,    0,  105,  538,  673,  977, 1277, 1346, 1715, 2250],
       [ 303,  105,    0,  433,  568,  872, 1172, 1241, 1610, 2145],
       [ 736,  538,  433,    0,  135,  439,  739,  808, 1177, 1712],
       [ 871,  673,  568,  135,    0,  304,  604,  673, 1042, 1577],
       [1175,  977,  872,  439,  304,    0,  300,  369,  738, 1273],
       [1475, 1277, 1172,  739,  604,  300,    0,   69,  438,  973],
       [1544, 1346, 1241,  808,  673,  369,   69,    0,  369,  904],
       [1913, 1715, 1610, 1177, 1042,  738,  438,  369,    0,  535],
       [2448, 2250, 2145, 1712, 1577, 1273,  973,  904,  535,    0]])
Remark: the numpy.ogrid function allows to directly create the vectors x and y of the previous example, with two "significant dimensions":
>>> x, y = np.ogrid[0:5, 0:5]
>>> x, y
(array([[0],
        [1],
        [2],
        [3],
        [4]]), array([[0, 1, 2, 3, 4]]))
>>> x.shape, y.shape
((5, 1), (1, 5))
>>> distance = np.sqrt(x**2 + y**2)
The same reshaping operations work on a bigger array:

>>> a = np.arange(36)
>>> b = a.reshape((6, 6))

Or,

>>> b = a.reshape((6, -1))   # unspecified (-1) value is inferred
So, np.ogrid is very useful as soon as we have to handle computations on a grid. On the other hand, np.mgrid directly provides matrices full of indices for cases where we can't (or don't want to) benefit from broadcasting:
>>> x, y = np.mgrid[0:4, 0:4]
>>> x
array([[0, 0, 0, 0],
       [1, 1, 1, 1],
       [2, 2, 2, 2],
       [3, 3, 3, 3]])
>>> y
array([[0, 1, 2, 3],
       [0, 1, 2, 3],
       [0, 1, 2, 3],
       [0, 1, 2, 3]])
Copies or views
ndarray.reshape may return a view (cf. help(np.reshape)), not a copy:
>>> b[0, 0] = 99
>>> a
array([99,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16,
       17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33,
       34, 35])
Beware!
>>> a = np.zeros((3, 2))
>>> b = a.T.reshape(3*2)
>>> b[0] = 9
>>> a
array([[ 0.,  0.],
       [ 0.,  0.],
       [ 0.,  0.]])
Dimension shuffling
>>> a = np.arange(4*3*2).reshape(4, 3, 2)
>>> a.shape
(4, 3, 2)
>>> a[0, 2, 1]
5
>>> b = a.transpose(1, 2, 0)
>>> b.shape
(3, 2, 4)
>>> b[2, 1, 0]
5
Reshaping
The inverse operation to attening:
>>> a = np.array([[1, 2, 3], [4, 5, 6]])
>>> a.shape
(2, 3)
>>> b = a.ravel()
>>> b.reshape((2, 3))
array([[1, 2, 3],
       [4, 5, 6]])
Resizing
Size of an array can be changed with ndarray.resize:
>>> a = np.arange(4)
>>> a.resize((8,))
>>> a
array([0, 1, 2, 3, 0, 0, 0, 0])
Case 2.b: Block matrices and vectors (and tensors)

Vector space: quantum level ⊗ spin

\psi = (\psi_1, \psi_2), \qquad \psi_1 = (\psi_{1\uparrow}, \psi_{1\downarrow}), \quad \psi_2 = (\psi_{2\uparrow}, \psi_{2\downarrow})
In short: for block matrices and vectors, it can be useful to preserve the block structure. In Numpy:
>>> psi = np.zeros((2, 2)) # dimensions: level, spin >>> psi[0,1] # <-- psi_{1,downarrow}
Linear operators on such block vectors have a similar block structure:

H = \begin{pmatrix} h_{11} & V \\ V^{\dagger} & h_{22} \end{pmatrix}

where h_{11} is the 2x2 block acting on the spin components of level 1, etc.:
>>> H = np.zeros((2, 2, 2, 2)) # dimensions: level1, level2, spin1, spin2 >>> h_11 = H[0,0,:,:] >>> V = H[0,1]
Doing the matrix product: get rid of the block structure, do the 4x4 matrix product, then put it back, i.e. compute H psi:
>>> def mdot(operator, psi):
...     return operator.transpose(0, 2, 1, 3).reshape(4, 4).dot(
...                psi.reshape(4)).reshape(2, 2)
I.e., reorder dimensions rst to level1, spin1, level2, spin2 and then reshape => correct matrix product.
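A quick sketch of using it (the values are invented for illustration; only the h_11 block is non-zero here):

>>> H = np.zeros((2, 2, 2, 2))
>>> H[0, 0] = np.eye(2)   # h_11 = identity on the spin components
>>> psi = np.zeros((2, 2))
>>> psi[0, 1] = 1.0       # psi_{1,downarrow}
>>> mdot(H, psi)          # H only acts within level 1 here
array([[ 0.,  1.],
       [ 0.,  0.]])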
Masks
>>> np.random.seed(3)
>>> a = np.random.random_integers(0, 20, 15)
>>> a
array([10,  3,  8,  0, 19, 10, 11,  9, 10,  6,  0, 20, 12,  7, 14])
>>> (a % 3 == 0)
array([False,  True, False,  True, False, False, False,  True, False,
        True,  True, False,  True, False, False], dtype=bool)
>>> mask = (a % 3 == 0)
>>> extract_from_a = a[mask]   # or, a[a%3==0]
>>> extract_from_a             # extract a sub-array with the mask
array([ 3,  0,  9,  6,  0, 12])
Extracting a sub-array using a mask produces a copy of this sub-array, not a view like slicing:
>>> extract_from_a[0] = -1
>>> a
array([10,  3,  8,  0, 19, 10, 11,  9, 10,  6,  0, 20, 12,  7, 14])
Indexing with a mask can be very useful to assign a new value to a sub-array:
>>> a[a % 3 == 0] = -1
>>> a
array([10, -1,  8, -1, 19, 10, 11, -1, 10, -1, -1, 20, -1,  7, 14])
Indexing can be done with an array of integers, where the same index is repeated several times:

>>> a = np.arange(10)
>>> a[::2] += 3
>>> a
array([ 3,  1,  5,  3,  7,  5,  9,  7, 11,  9])
>>> a[[2, 3, 2, 4, 2]]   # note: [2, 3, 2, 4, 2] is a Python list
array([5, 3, 5, 7, 5])
We can even use fancy indexing and broadcasting at the same time:
>>> a = np.arange(12).reshape(3,4) >>> a array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> i = np.array([[0, 1], [1, 2]]) >>> a[i, 2] # same as a[i, 2*np.ones((2,2), dtype=int)] array([[ 2, 6], [ 6, 10]])
New values can be assigned with this kind of indexing:

>>> a[[9, 7]] = -10
>>> a
array([  3,   1,   5,   3,   7,   5,   9, -10,  11, -10])
>>> a[[2, 3, 2, 4, 2]] += 1   # each repeated index is incremented only once
>>> a
array([  3,   1,   6,   4,   8,   5,   9, -10,  11, -10])
When a new array is created by indexing with an array of integers, the new array has the same shape as the array of integers:
>>> a = np.arange(10)
>>> idx = np.array([[3, 4], [9, 7]])
>>> a[idx]
array([[3, 4],
       [9, 7]])

>>> a = np.arange(12).reshape(3, 4)
>>> a
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
>>> i = np.array([0, 1, 1, 2])
>>> j = np.array([2, 1, 3, 3])
>>> a[i, j]
array([ 2,  5,  7, 11])
>>> i = np.array([[0, 1], [1, 2]])
>>> j = np.array([[2, 1], [3, 3]])
>>> a[i, j]
array([[ 2,  5],
       [ 7, 11]])
>>> y, x = np.ogrid[0:512, 0:512]   # x and y indices of pixels
>>> y.shape, x.shape
((512, 1), (1, 512))
>>> centerx, centery = (256, 256)   # center of the image
>>> mask = ((y - centery)**2 + (x - centerx)**2) > 230**2   # circle
Then assign the value 0 to the pixels of the image corresponding to the mask. The syntax is extremely simple and intuitive:
In [19]: lena[mask] = 0 In [20]: plt.imshow(lena) Out[20]: <matplotlib.image.AxesImage object at 0xa36534c>
Follow-up: copy all instructions of this exercise into a script called lena_locket.py, then execute this script in IPython with %run lena_locket.py. Change the circle to an ellipse.
Here are a few images we will be able to obtain with our manipulations: use different colormaps, crop the image, change some parts of the image.
1. Form the 2-D array (without typing it in explicitly): [[1, 6, 11], [2, 7, 12], [3, 8, 13], [4, 9, 14], [5, 10, 15]] and generate a new array containing its 2nd and 4th rows.
2. Divide each column of the array
>>> a = np.arange(25).reshape(5, 5)
elementwise with the array b = np.array([1., 5, 10, 15, 20]). (Hint: np.newaxis.)

Let's use the imshow function of pylab to display the image:
In [3]: import pylab as plt In [4]: lena = scipy.lena() In [5]: plt.imshow(lena)
3. Harder one: Generate a 10 x 3 array of random numbers (in range [0,1]). For each row, pick the number closest to 0.5. Use abs and argsort to find the column j closest for each row. Use fancy indexing to extract the numbers. (Hint: for a[i, j], the array i must contain the row numbers corresponding to the entries in j.)
Lena is then displayed in false colors. A colormap must be specified for her to be displayed in grey.
In [6]: plt.imshow(lena, plt.cm.gray) In [7]: # or, In [7]: plt.gray()
Create an array of the image with a narrower centering: for example, remove 30 pixels from all the borders of the image. To check the result, display this new array with imshow.
In [9]: crop_lena = lena[30:-30,30:-30]
We will now frame Lena's face with a black locket. For this, we need to create a mask corresponding to the pixels we want to be black. The mask is defined by the condition (y-256)**2 + (x-256)**2 > 230**2:
(Hints: use elementwise operations and broadcasting. You can make np.ogrid give a number of points in a given range with np.ogrid[0:1:20j].)

Reminder: Python functions:
def f(a, b, c): return some_result
Compute and print, based on the data in populations.txt:
1. The mean and std of the populations of each species for the years in the period.
2. Which year each species had the largest population.
3. Which species has the largest population for each year. (Hint: argsort & fancy indexing of np.array(['H', 'L', 'C']))
4. Which years any of the populations is above 50000. (Hint: comparisons and np.any)
5. The top 2 years for each species when they had the lowest populations. (Hint: argsort, fancy indexing)
6. Compare (plot) the change in hare population (see help(np.gradient)) and the number of lynxes. Check correlation (see help(np.corrcoef)).
... all without for-loops.
Write a script that computes the Mandelbrot fractal. The Mandelbrot iteration:
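# a sketch of the iteration (the grid resolution is our choice; z starts at 0)
import numpy as np

N_max = 50
some_threshold = 50

# construct a grid of c = x + 1j*y values (step 1 below)
x, y = np.ogrid[-2:1:500j, -1.5:1.5:500j]
c = x + 1j*y

# iterate z <- z**2 + c (step 2 below); diverging entries overflow to inf/nan
z = 0
for j in xrange(N_max):
    z = z**2 + c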
Point (x, y) belongs to the Mandelbrot set if |z| stays below some_threshold after the iterations. Do this computation by:
1. Construct a grid of c = x + 1j*y values in range [-2, 1] x [-1.5, 1.5]
2. Do the iteration
Approximate the integral $\int_0^1 \int_0^1 \int_0^1 (ab - c)\, da\, db\, dc$ over this volume with the mean. The exact result is: $\ln 2 - \frac{1}{2} \approx 0.1931$.
3. Form the 2-d boolean mask indicating which points are in the set 4. Save the result to an image with:
>>> import matplotlib.pyplot as plt
>>> plt.imshow(mask.T, extent=[-2, 1, -1.5, 1.5])
>>> plt.gray()
>>> plt.savefig('mandelbrot.png')
Obtain a subset of the elements of an array and/or modify their values with masks:
>>> a[a < 0] = 0
Know miscellaneous operations on arrays, such as finding the mean or max (array.max(), array.mean()). No need to retain everything, but have the reflex to search in the documentation (online docs, help(), lookfor())!! For advanced use: master the indexing with arrays of integers, as well as broadcasting. Know more Numpy functions to handle various array operations.
3.4 3. Moving on
3.4.1 More data types

Casting
Bigger type wins in mixed-type operations:
>>> np.array([1, 2, 3]) + 1.5 array([ 2.5, 3.5, 4.5])
Markov chain transition matrix P, and probability distribution on the states p:
1. 0 <= P[i,j] <= 1: probability to go from state i to state j
2. Transition rule: $p_{new} = P^T p_{old}$
3. all(sum(P, axis=1) == 1), p.sum() == 1: normalization
Write a script that works with 5 states, and:
- Constructs a random matrix, and normalizes each row so that it is a transition matrix.
- Starts from a random (normalized) probability distribution p and takes 50 steps => p_50
- Computes the stationary distribution: the eigenvector of P.T with eigenvalue 1 (numerically: closest to 1) => p_stationary. Remember to normalize the eigenvector (I didn't...)
- Checks if p_50 and p_stationary are equal to tolerance 1e-5
Toolbox: np.random.rand, .dot(), np.linalg.eig, reductions, abs(), argmin, comparisons, all, np.linalg.norm, etc. A sketch of a possible solution follows.
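A minimal sketch of such a script (ours, with our own variable names):

import numpy as np

n_states = 5
n_steps = 50

# random transition matrix: normalize each row so it sums to 1
P = np.random.rand(n_states, n_states)
P /= P.sum(axis=1)[:, np.newaxis]

# random normalized probability distribution
p = np.random.rand(n_states)
p /= p.sum()

# take 50 steps: p_new = P.T p_old
for k in xrange(n_steps):
    p = P.T.dot(p)

# stationary distribution: eigenvector of P.T for the eigenvalue closest to 1
evals, evecs = np.linalg.eig(P.T)
p_stationary = np.real(evecs[:, np.abs(evals - 1).argmin()])
p_stationary /= p_stationary.sum()   # don't forget to normalize

print np.allclose(p, p_stationary, atol=1e-5)   # True if p_50 reached it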
Forced casts:
>>> a = np.array([1.7, 1.2, 1.6]) >>> b = a.astype(int) # <-- truncates to integer >>> b array([1, 1, 1])
Rounding:
>>> a = np.array([1.7, 1.2, 1.6]) >>> b = np.around(a) >>> b # still floating-point array([ 2., 1., 2.]) >>> c = np.around(a).astype(int) >>> c array([2, 1, 2])
3.4. 3. Moving on
76
Unsigned integers:

uint8    8 bits
uint16   16 bits
uint32   32 bits
uint64   64 bits
>>> np.iinfo(np.uint32).max, 2**32 - 1
(4294967295, 4294967295)
>>> np.iinfo(np.uint64).max, 2**64 - 1
(18446744073709551615, 18446744073709551615L)
Floating-point numbers:

float16    16 bits
float32    32 bits
float64    64 bits (same as float)
float96    96 bits, platform-dependent (same as np.longdouble)
float128   128 bits, platform-dependent (same as np.longdouble)
>>> np.finfo(np.float32).eps 1.1920929e-07 >>> np.finfo(np.float64).eps 2.2204460492503131e-16 >>> np.float32(1e-8) + np.float32(1) == 1 True >>> np.float64(1e-8) + np.float64(1) == 1 False
Complex floating-point numbers:

complex64    two 32-bit floats
complex128   two 64-bit floats
complex192   two 96-bit floats, platform-dependent
complex256   two 128-bit floats, platform-dependent
Smaller data types

If you don't know you need special data types, then you probably don't. Comparison on using float32 instead of float64:
- Half the size in memory and on disk
- Half the memory bandwidth required (may be a bit faster in some operations)
In [1]: a = np.zeros((1e6,), dtype=np.float64) In [2]: b = np.zeros((1e6,), dtype=np.float32) In [3]: %timeit a*a 1000 loops, best of 3: 1.78 ms per loop In [4]: %timeit b*b 1000 loops, best of 3: 1.07 ms per loop
Note: There are a bunch of other syntaxes for constructing structured arrays; see the Numpy documentation.
But: bigger rounding errors, sometimes in surprising places (i.e., don't use them unless you really need them).
$$A_k = \sum_{m=0}^{n-1} a_m \exp\left(-2\pi i \frac{m k}{n}\right), \qquad k = 0, \ldots, n-1.$$
Full details of what you can use such standard routines for are beyond this tutorial. Nevertheless, here they are, if you need them:
>>> a = np.exp(2j*np.pi*np.arange(10)) >>> fa = np.fft.fft(a) >>> np.set_printoptions(suppress=True) # print small number as 0 >>> fa array([ 10.-0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, -0.+0.j, -0.+0.j, -0.+0.j, -0.+0.j]) >>> a = np.exp(2j*np.pi*np.arange(3)) >>> b = a[:,np.newaxis] + a[np.newaxis,:] >>> np.fft.fftn(b) array([[ 18.-0.j, 0.+0.j, -0.+0.j], [ 0.+0.j, 0.+0.j, 0.+0.j], [ -0.+0.j, 0.+0.j, 0.+0.j]])
See help(np.fft) and help(np.fft.fft) for more. These functions in general take the axes argument, and you can additionally specify padding etc.
# There's probably a period of around 10 years (obvious from the
# plot), but for this crude a method, there's not enough data to say
# much more.
Worked example: Gaussian image blur
Convolution: $f_1(t) = \int dt'\, K(t - t')\, f_0(t')$
$\tilde{f}_1(\omega) = \tilde{K}(\omega)\, \tilde{f}_0(\omega)$
""" Simple image blur by convolution with a Gaussian kernel """ import numpy as np from numpy import newaxis import matplotlib.pyplot as plt # read image img = plt.imread(../../../data/elephant.png) # prepare an 1-D Gaussian convolution kernel t = np.linspace(-10, 10, 30) bump = np.exp(-0.1*t**2) bump /= np.trapz(bump) # normalize the integral to 1 # make a 2-D kernel out of it kernel = bump[:,newaxis] * bump[newaxis,:] # padded fourier transform, with the same shape as the image kernel_ft = np.fft.fft2(kernel, s=img.shape[:2], axes=(0, 1)) # convolve img_ft = np.fft.fft2(img, axes=(0, 1)) img2_ft = kernel_ft[:,:,newaxis] * img_ft img2 = np.fft.ifft2(img2_ft, axes=(0, 1)).real # clip values to range img2 = np.clip(img2, 0, 1) # plot output plt.imshow(img2) plt.show()
# Further exercise (only if you are familiar with this stuff):
#
# A "wrapped border" appears in the upper left and top edges of the
# image. This is because the padding is not done correctly, and does
# not take the kernel size into account (so the convolution "flows out
# of bounds of the image"). Try to remove this artifact.
Warning: Not all Numpy functions respect masks, for instance np.dot, so check the return types. The masked_array returns a view to the original array:
>>> mx[1] = 9
>>> x
array([  1,   9,   3, -99,   5])
Example: Masked statistics

Canadian rangers were distracted when counting hares and lynxes in 1903-1910 and 1917-1918, and got the numbers wrong. (Carrot farmers stayed alert, though.) Compute the mean populations over time, ignoring the invalid numbers.
>>> data = np.loadtxt('../../data/populations.txt')
>>> populations = np.ma.masked_array(data[:, 1:])
>>> year = data[:, 0]
>>> bad_years = (((year >= 1903) & (year <= 1910))
...              | ((year >= 1917) & (year <= 1918)))
>>> populations[bad_years, 0] = np.ma.masked
>>> populations[bad_years, 1] = np.ma.masked
>>> populations.mean(axis=0)
masked_array(data = [40472.7272727 18627.2727273 42400.0],
             mask = [False False False],
       fill_value = 1e+20)
>>> populations.std(axis=0)
masked_array(data = [21087.656489 15625.7998142 3322.50622558],
             mask = [False False False],
       fill_value = 1e+20)
The mask
You can modify the mask by assigning:
>>> mx[1] = np.ma.masked
>>> mx
masked_array(data = [1 -- 3 -- 5],
             mask = [False  True False  True False],
       fill_value = 999999)
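The mask is cleared again on assignment, so setting mx[1] back to a number unmasks it (a sketch, which explains the 9 reappearing below):

>>> mx[1] = 9
>>> mx
masked_array(data = [1 9 3 -- 5],
             mask = [False False False  True False],
       fill_value = 999999)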
The masked entries can be filled with a given value to get a usual array back:
>>> x2 = mx.filled(-1)
>>> x2
array([ 1,  9,  3, -1,  5])
Domain-aware functions
The masked array package also contains domain-aware functions:
>>> np.ma.log(np.array([1, 2, -1, -2, 3, -5])) masked_array(data = [0.0 0.69314718056 -- -- 1.09861228867 --], mask = [False False True True False True], fill_value = 1e+20)
Note: Streamlined and more seamless support for dealing with missing data in arrays is making its way into Numpy 1.7. Stay tuned!
3.4.5 Polynomials
Numpy also contains polynomials in different bases. For example, $3x^2 + 2x - 1$:
>>> p = np.poly1d([3, 2, -1])
>>> p(0)
-1
>>> p.roots
array([-1.        ,  0.33333333])
>>> p.order
2

>>> x = np.linspace(0, 1, 20)
>>> y = np.cos(x) + 0.3*np.random.rand(20)
>>> p = np.poly1d(np.polyfit(x, y, 3))

>>> t = np.linspace(0, 1, 200)
>>> plt.plot(x, y, 'o', t, p(t), '-')
>>> plt.show()
>>> p = np.polynomial.Polynomial([-1, 2, 3]) # coefs in different order! >>> p(0) -1.0 >>> p.roots() array([-1. , 0.33333333]) >>> p.order 2
Example using polynomials in Chebyshev basis, for polynomials in range [-1, 1]:
>>> x = np.linspace(-1, 1, 2000)
>>> y = np.cos(x) + 0.3*np.random.rand(2000)
>>> p = np.polynomial.Chebyshev.fit(x, y, 90)

>>> t = np.linspace(-1, 1, 200)
>>> plt.plot(x, y, 'r.')
>>> plt.plot(t, p(t), 'k-', lw=3)
>>> plt.show()
The Chebyshev polynomials have some advantages in interpolation. See http://docs.scipy.org/doc/numpy/reference/routines.polynomials.poly1d.html for more.
<read-only buffer for 0xa588ba8, size 4, offset 0 at 0xa55cd60> >>> y.base is x True >>> y.flags C_CONTIGUOUS : True F_CONTIGUOUS : True OWNDATA : False WRITEABLE : False ALIGNED : True UPDATEIFCOPY : False
The owndata and writeable flags indicate status of the memory block.
At which byte in x.data does the item x[1, 2] begin? The answer (in Numpy): strides, the number of bytes to jump to find the next element; one stride per dimension.
>>> x.strides
(3, 1)
>>> byte_offset = 3*1 + 1*2   # to find x[1, 2]
>>> x.data[byte_offset]
'\x06'
>>> x[1, 2]
6
simple, flexible
- Need to jump 6 bytes to find the next row
- Need to jump 2 bytes to find the next column
>>> y = np.array(x, order='F')
>>> y.strides
(2, 6)
>>> str(y.data)
'\x01\x00\x04\x00\x07\x00\x02\x00\x05\x00\x08\x00\x03\x00\x06\x00\t\x00'
- Need to jump 2 bytes to find the next row
- Need to jump 6 bytes to find the next column

Similarly in higher dimensions:
- C: last dimensions vary fastest (= smaller strides)
- F: first dimensions vary fastest

With shape = $(d_1, d_2, \ldots, d_n)$ and strides = $(s_1, s_2, \ldots, s_n)$:

$$s_j^C = d_{j+1} d_{j+2} \cdots d_n \times \text{itemsize}, \qquad s_j^F = d_1 d_2 \cdots d_{j-1} \times \text{itemsize}$$
Here, there is no way to represent the array c = b.reshape(3*2) given one stride and the block of memory for a. Therefore, the reshape operation needs to make a copy here (see the sketch after the transpose example below).
3.5.4 Summary
Numpy array: block of memory + indexing scheme + data type description

Indexing: strides

byte_position = np.sum(arr.strides * indices)

(A quick check of this formula follows below.) Various tricks you can do by playing with the strides (stuff for an advanced tutorial it is)
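A quick check of the byte-position formula (a sketch of ours; with int8 and arange, the value of each element happens to equal its byte offset):

>>> arr = np.arange(12, dtype=np.int8).reshape(3, 4)
>>> arr.strides
(4, 1)
>>> np.sum(np.array(arr.strides) * np.array([1, 2]))  # byte position of arr[1, 2]
6
>>> arr[1, 2]
6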
Slicing
Everything can be represented by changing only shape, strides, and possibly adjusting the data pointer! Never makes copies of the data
>>> x = np.array([1, 2, 3, 4, 5, 6], dtype=np.int32) >>> y = x[::-1] >>> y array([6, 5, 4, 3, 2, 1]) >>> y.strides (-4,) >>> y = x[2:] >>> y.__array_interface__[data][0] - x.__array_interface__[data][0] 8 >>> x = np.zeros((10, 10, 10), dtype=np.float) >>> x.strides (800, 80, 8) >>> x[::2,::3,::4].strides (1600, 240, 32)
Reshaping
But: not all reshaping operations can be represented by playing with strides.
>>> a = np.arange(6, dtype=np.int8).reshape(3, 2)
>>> b = a.T
>>> b.strides
(1, 2)
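Flattening this transposed array shows the copy being made (a small sketch; this is the array c referred to in the text above):

>>> c = b.reshape(3*2)
>>> c
array([0, 2, 4, 1, 3, 5], dtype=int8)
>>> c.base is a   # c does not point into a's memory: it is a copy
False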
Numpy's and Scipy's documentation is enriched and updated on a regular basis by users on a wiki http://docs.scipy.org/numpy/. As a result, some docstrings are clearer or more detailed on the wiki, and you may want to read the documentation directly on the wiki instead of the official documentation website. Note that anyone can create an account on the wiki and write better documentation; this is an easy way to contribute to an open-source project and improve the tools you are using!
author Emmanuelle Gouillart

Rather than knowing all functions in Numpy and Scipy, it is important to find information rapidly throughout the documentation and the available help. Here are some ways to get information:

In Ipython, the help function opens the docstring of the function. Only type the beginning of the function's name and use tab completion to display the matching functions.
In [204]: help np.v<TAB>
np.vander     np.vdot       np.var        np.vectorize  np.version
np.void       np.void0      np.vsplit     np.vstack

In [204]: help np.vander
Scipy's cookbook http://www.scipy.org/Cookbook gives recipes for many common problems frequently encountered, such as fitting data points, solving ODEs, etc. Matplotlib's website http://matplotlib.sourceforge.net/ features a very nice gallery with a large number of plots, each of them showing both the source code and the resulting plot. This is very useful for learning by example. More standard documentation is also available.
In Ipython it is not possible to open a separate window for help and documentation; however one can always open a second Ipython shell just to display help and docstrings...

Numpy's and Scipy's documentation can be browsed online at http://docs.scipy.org/doc. The search button is quite useful inside the reference documentation of the two packages (http://docs.scipy.org/doc/numpy/reference/ and http://docs.scipy.org/doc/scipy/reference/). Tutorials on various topics as well as the complete API with all docstrings are found on this website.
Mayavi's website http://code.enthought.com/projects/mayavi/docs/development/html/mayavi/ also has a very nice gallery of examples http://code.enthought.com/projects/mayavi/docs/development/html/mayavi/auto/examples.html in which one can browse for different visualization solutions.
SciPy Users List ([email protected]): scientific computing with Python, high-level data processing, in particular with the scipy package. [email protected] for plotting with matplotlib.
Finally, two more technical possibilities are useful as well: In Ipython, the magical function %psearch searches for objects matching patterns. This is useful if, for example, one does not know the exact name of a function.
In [3]: import numpy as np In [4]: %psearch np.diag* np.diag np.diagflat np.diagonal
If everything listed above fails (and Google doesn't have the answer)... don't despair! Write to the mailing list suited to your problem: you should have a quick answer if you describe your problem well. Experts on scientific python often give very enlightening explanations on the mailing list. Numpy discussion ([email protected]): all about numpy arrays, manipulating them, indexing questions, etc.
Welcome to pylab, a matplotlib-based Python environment. For more information, type help(pylab). In [1]:
Matplotlib
5.1 Introduction
matplotlib is probably the single most used Python package for 2D-graphics. It provides both a very quick way to visualize data from Python and publication-quality gures in many formats. We are going to explore matplotlib in interactive mode covering most common cases. We also look at the class library which is provided with an object-oriented interface.
5.2 IPython
IPython is an enhanced interactive Python shell that has lots of interesting features including named inputs and outputs, access to shell commands, improved debugging and many more. When we start it with the command line argument -pylab, it allows interactive matplotlib sessions that have Matlab/Mathematica-like functionality.
5.3 pylab
pylab provides a procedural interface to the matplotlib object-oriented plotting library. It is modeled closely after Matlab(TM). Therefore, the majority of plotting commands in pylab have Matlab(TM) analogs with similar arguments. Important commands are explained with interactive examples.
To apply the new properties we need to redraw the screen:
In [10]: draw()
1. as keyword arguments at creation time: plot(x, linear, 'g:+', x, square, 'r--o')
2. with the function setp: setp(line, color='g')
3. using the set_something methods: line.set_marker('o')

Lines have several properties as shown in the following table:

Property          Value
alpha             alpha transparency on 0-1 scale
antialiased       True or False - use antialised rendering
color             matplotlib color arg
data_clipping     whether to use numeric to clip data
label             string optionally used for legend
linestyle         one of '-', '--', '-.', ':'
linewidth         float, the line width in points
marker            one of '+', ',', 'o', '.', 's', 'v', 'x', '>', '<', etc
markeredgewidth   line width around the marker symbol
markeredgecolor   edge color if a marker is used
markerfacecolor   face color if a marker is used
markersize        size of the marker in points
This does not look particularly nice. We would rather like to have it at the left. So we clean the old graph:
In [6]: clf()
and print it anew providing new line styles (a green dotted line with crosses for the linear and a red dashed line with circles for the square graph):
In [7]: lines = plot(x, linear, 'g:+', x, square, 'r--o')
There are many line styles that can be specified with symbols:

Symbol   Description
-        solid line
--       dashed line
-.       dash-dot line
:        dotted line
.        points
,        pixels
o        circle symbols
^        triangle up symbols
v        triangle down symbols
<        triangle left symbols
>        triangle right symbols
s        square symbols
+        plus symbols
x        cross symbols
D        diamond symbols
d        thin diamond symbols
1        tripod down symbols
2        tripod up symbols
3        tripod left symbols
4        tripod right symbols
h        hexagon symbols
H        rotated hexagon symbols
p        pentagon symbols
|        vertical line symbols
_        horizontal line symbols
steps    use gnuplot style 'steps' (kwarg only)
5.4.1 Exercises
1. Plot a simple graph of a sine function in the range 0 to 3 with a step size of 0.01.
2. Make the line red. Add diamond-shaped markers with size of 5.
3. Add a legend and a grid to the plot.
Colors can be given in many ways: one-letter abbreviations, gray scale intensity from 0 to 1, RGB in hex and tuple format as well as any legal html color name. The one-letter abbreviations are very handy for quick work. With the following you can get quite a few things done:
5.5 Properties
So far we have used properties for the lines. There are three possibilities to set them:
Abbreviation   Color
b              blue
g              green
r              red
c              cyan
m              magenta
y              yellow
k              black
w              white

Other objects also have properties. The following table lists the text properties:

Property              Value
alpha                 alpha transparency on 0-1 scale
color                 matplotlib color arg
family                set the font family, eg sans-serif, cursive, fantasy
fontangle             the font slant, one of normal, italic, oblique
horizontalalignment   left, right or center
multialignment        left, right or center only for multiline strings
name                  font name, eg, Sans, Courier, Helvetica
position              x,y location
variant               font variant, eg normal, small-caps
rotation              angle in degrees for rotated text
size                  fontsize in points, eg, 8, 10, 12
style                 font style, one of normal, italic, oblique
text                  set the text string itself
verticalalignment     top, bottom or center
weight                font weight, e.g. normal, bold, heavy, light
matplotlib supports TeX mathematical expressions. So r'$\pi$' will show up as $\pi$.

If you want to get more control over where the text goes, you use annotations:
In [4]: ax = gca()
In [5]: ax.annotate('Here is something special', xy=(1, 1))
We will write the text at the position (1, 1) in terms of data. There are many optional arguments that help to customize the position of the text. The arguments textcoords and xycoords specify what x and y mean:

argument          coordinate system
figure points     points from the lower left corner of the figure
figure pixels     pixels from the lower left corner of the figure
figure fraction   0,0 is lower left of figure and 1,1 is upper right
axes points       points from lower left corner of axes
axes pixels       pixels from lower left corner of axes
axes fraction     0,0 is lower left of axes and 1,1 is upper right
data              use the axes data coordinate system
5.5.1 Exercise
1. Apply different line styles to a plot. Change line color and thickness as well as the size and the kind of the marker. Experiment with different styles.
5.6 Text
We've already used some commands to add text to our figure: xlabel, ylabel, and title. There are two functions to put text at a defined position. text adds the text with data coordinates:
In [2]: plot(arange(10))
In [3]: t1 = text(5, 5, 'Text in the middle')
If we do not supply xycoords, the text will be written at xy. Furthermore, we can use an arrow whose appearance can also be described in detail:
In [14]: plot(arange(10))
Out[14]: [<matplotlib.lines.Line2D instance at 0x01BB15D0>]
In [15]: ax = gca()
In [16]: ax.annotate('Here is something special', xy=(2, 1), xytext=(1, 5))
Out[16]: <matplotlib.text.Annotation instance at 0x01BB1648>
In [17]: ax.annotate('Here is something special', xy=(2, 1), xytext=(1, 5),
   ....:             arrowprops={'facecolor': 'r'})
5.6.1 Exercise
1. Annotate a line at two places with text. Use green and red arrows and align it according to figure points and data.
5.7 Ticks
5.7.1 Where and What
Well formatted ticks are an important part of publication-ready figures. matplotlib provides a totally configurable system for ticks. There are tick locators to specify where ticks should appear and tick formatters to make ticks look the way you want. Major and minor ticks can be located and formatted independently from each other. By default minor ticks are not shown, i.e. there is only an empty list for them because there is a NullLocator (see below).
In [5]: major_locator = MultipleLocator(2)
In [6]: major_formatter = FormatStrFormatter('%5.2f')
In [7]: minor_locator = MultipleLocator(1)
In [8]: ax.xaxis.set_major_locator(major_locator)
In [9]: ax.xaxis.set_minor_locator(minor_locator)
In [10]: ax.xaxis.set_major_formatter(major_formatter)
In [10]: draw()
After we redraw the figure our x axis should look like this:
5.7.4 Exercises
1. Plot a graph with dates for one year with daily values at the x axis using the built-in module datetime.
2. Format the dates in such a way that only the first day of the month is shown.
3. Display the dates with and without the year. Show the month as a number and as the first three letters of the month name.
All of these locators derive from the base class matplotlib.ticker.Locator. You can make your own locator deriving from it. Handling dates as ticks can be especially tricky. Therefore, matplotlib provides special locators in matplotlib.dates:

Class            Description
MinuteLocator    locate minutes
HourLocator      locate hours
DayLocator       locate specified days of the month
WeekdayLocator   locate days of the week, e.g. MO, TU
MonthLocator     locate months, e.g. 10 for October
YearLocator      locate years that are multiples of base
RRuleLocator     locate using a matplotlib.dates.rrule
5.8.2 Figures
A figure is the window in the GUI that has 'Figure #' as title. Figures are numbered starting from 1, as opposed to the normal Python way of starting from 0. This is clearly MATLAB-style. There are several parameters that determine what the figure looks like:

Argument    Default            Description
num         1                  number of figure
figsize     figure.figsize     figure size in inches (width, height)
dpi         figure.dpi         resolution in dots per inch
facecolor   figure.facecolor   color of the drawing background
edgecolor   figure.edgecolor   color of edge around the drawing background
frameon     True               draw figure frame or not
All of these formatters derive from the base class matplotlib.ticker.Formatter. You can make your own formatter deriving from it. Now we set our major locator to 2 and the minor locator to 1. We also format the numbers as decimals using the FormatStrFormatter:
The defaults can be specified in the resource file and will be used most of the time. Only the number of the figure is frequently changed.
When you work with the GUI you can close a figure by clicking on the x in the upper right corner. But you can close a figure programmatically by calling close. Depending on the argument it closes (1) the current figure (no argument), (2) a specific figure (figure number or figure instance as argument), or (3) all figures ('all' as argument). As with other objects, you can set figure properties with setp or with the set_something methods.
In [24]: plot(x)
5.8.3 Subplots
With subplot you can arrange plots in a regular grid. You need to specify the number of rows and columns and the number of the plot. A plot with two rows and one column is created with subplot(211) and subplot(212). The result looks like this:
5.8.5 Exercises
1. Draw two figures, one 5 by 5, one 10 by 10 inches.
2. Add four subplots to one figure. Add labels and ticks only to the outermost axes.
3. Place a small plot in one bigger plot.
If you want two plots side by side, you create one row and two columns with subplot(121) and subplot(122). The result looks like this:
Frequently, you don't want all subplots to have ticks or labels. You can set the xticklabels or the yticklabels to an empty list ([]). Every subplot defines the methods is_first_row, is_first_col, is_last_row, is_last_col. These can help to set ticks and labels only for the outer plots.
The default column width is 0.8. It can be changed with common methods by setting width, as can color and bottom; we can also set an error bar with yerr or xerr.
5.8.4 Axes
Axes are very similar to subplots but allow placement of plots at any location in the gure. So if we want to put a smaller plot inside a bigger one we do so with axes:
In [22]: plot(x) Out[22]: [<matplotlib.lines.Line2D instance at 0x02C9CE90>] In [23]: a = axes([0.2, 0.5, 0.25, 0.25])
We get:
The range of the whiskers can be determined with the argument whis, which defaults to 1.5. The range of the whiskers is between the most extreme data point within whis*(75%-25%) of the data.
z = ones((10, 10))
z[5,5] = 7
z[2,1] = 3
z[8,7] = 4
z
array([[ 1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.],
       [ 1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.],
       [ 1.,  3.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.],
       [ 1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.],
       [ 1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.],
       [ 1.,  1.,  1.,  1.,  1.,  7.,  1.,  1.,  1.,  1.],
       [ 1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.],
       [ 1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.],
       [ 1.,  1.,  1.,  1.,  1.,  1.,  1.,  4.,  1.,  1.],
       [ 1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.]])
We can also fill the area. We just use numbers from 0 to 9 for the values v:
v = x contourf(x, x, z, v)
We want to have the whiskers well within the plot and therefore increase the y axis:
ax = gca() ax.set_ylim(0, 12) draw()
5.9.7 Histograms
We can make histograms. Let's get some normally distributed random numbers from numpy:
If we want only one axis with a logarithmic scale we can use semilogx or semilogy.
All arrows point to the upper right, except two. The one at the location (4, 4) has 3 units in x-direction and the other at location (1, 1) has -1 unit in y direction:
import datetime
d1 = datetime.datetime(2000, 1, 1)
delta = datetime.timedelta(15)
dates = [d1 + x * delta for x in range(10)]
dates
[datetime.datetime(2000, 1, 1, 0, 0),
 datetime.datetime(2000, 1, 16, 0, 0),
 datetime.datetime(2000, 1, 31, 0, 0),
 datetime.datetime(2000, 2, 15, 0, 0),
 datetime.datetime(2000, 3, 1, 0, 0),
 datetime.datetime(2000, 3, 16, 0, 0),
 datetime.datetime(2000, 3, 31, 0, 0),
 datetime.datetime(2000, 4, 15, 0, 0),
 datetime.datetime(2000, 4, 30, 0, 0),
 datetime.datetime(2000, 5, 15, 0, 0)]
Comparing this with the arguments of figure in pylab shows significant overlap:
num=None, figsize=None, dpi=None, facecolor=None edgecolor=None, frameon=True
import pylab                                   #20
pylab_fig = pylab.figure(1, figsize=figsize)   #21
figManager = _pylab_helpers.Gcf.get_active()   #22
figManager.canvas.figure = fig                 #23
pylab.show()                                   #24
Figure provides lots of methods, many of them have equivalents in pylab. The methods add_axes and add_subplot are called if new axes or subplot are created with axes or subplot in pylab. Also the method gca maps directly to pylab as do legend, text and many others. There are also several set_something method such as set_facecolor or set_edgecolor that will be called through pylab to set properties of the gure. Figure also implements get_something methods such as get_axes or get_facecolor to get properties of the gure.
Since we are not in the interactive pylab-mode, we need to import the class Figure explicitly (#1). We set the size of our figure to be 8 by 5 inches (#2). Now we initialize a new figure (#3) and add a subplot to the figure (#4). The 111 says one plot at position 1, 1, just as in MATLAB. We create a new plot with the numbers from 0 to 9 and at the same time get a reference to our line (#5). We can add several things to our plot. So we set a title and labels for the x and y axis (#6). We also want to see the grid (#7) and would like to have little filled circles as markers (#8).

There are many different backends for rendering our figure. We use the Anti-Grain Geometry toolkit (http://www.antigrain.com) to render our figure. First, we import the backend (#9), then we create a new canvas that renders our figure (#10). We save our figure in a png-file with a resolution of 80 dpi (#11).

We can use several GUI toolkits directly. So we import Tkinter (#12) as well as the corresponding backend (#13). Now we have to do some basic GUI programming work. We make a root object for our GUI (#14) and feed it together with our figure to the backend to get our canvas (#15). We call the show method (#16), pack our widget (#17), and call the Tkinter mainloop to start the application (#18). You should see a GUI window with the figure on your screen. After closing the screen, the next part, the script, will be executed.

We would like to create a screen display just as we would use pylab. Therefore we import a helper (#19) and pylab itself (#20). We create a normal figure with pylab (#21) and get the corresponding figure manager (#22). Now let's set our figure we created above to be the current figure (#23) and let pylab show the result (#24). The lower part of the figure might be covered by the toolbar. If so, please adjust the figsize for pylab accordingly.
5.10.4 Example
Lets look at an example for using the object-oriented API:
#file matplotlib/oo.py
from matplotlib.figure import Figure          #1

figsize = (8, 5)                              #2
fig = Figure(figsize=figsize)                 #3
ax = fig.add_subplot(111)                     #4
line = ax.plot(range(10))[0]                  #5
ax.set_title('Plotted with OO interface')     #6
ax.set_xlabel('measured')
ax.set_ylabel('calculated')
ax.grid(True)                                 #7
line.set_marker('o')                          #8
5.10.5 Exercises
1. Use the object-oriented API of matplotlib to create a png-file with a plot of two lines, one linear and one square, with a legend in it.
from matplotlib.backends.backend_agg import FigureCanvasAgg #9 canvas = FigureCanvasAgg(fig) #10 canvas.print_figure("oo.png", dpi=80) #11
import Tkinter as Tk #12 from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg #13 root = Tk.Tk() #14 canvas2 = FigureCanvasTkAgg(fig, master=root) #15 canvas2.show() #16 canvas2.get_tk_widget().pack(side=Tk.TOP, fill=Tk.BOTH, expand=1) #17 Tk.mainloop() #18 from matplotlib import _pylab_helpers #19
cluster       Vector quantization / Kmeans
fftpack       Fourier transform
integrate     Integration routines
interpolate   Interpolation
io            Data input and output
linalg        Linear algebra routines
maxentropy    Routines for fitting maximum entropy models
ndimage       n-dimensional image package
odr           Orthogonal distance regression
optimize      Optimization
signal        Signal processing
sparse        Sparse matrices
spatial       Spatial data structures and algorithms
special       Any special mathematical functions
stats         Statistics
authors Adrien Chauve, Andre Espaze, Emmanuelle Gouillart, Gal Varoquaux

Scipy

The scipy package contains various toolboxes dedicated to common issues in scientific computing. Its different submodules correspond to different applications, such as interpolation, integration, optimization, image processing, statistics, special functions, etc. scipy can be compared to other standard scientific-computing libraries, such as the GSL (GNU Scientific Library for C and C++), or Matlab's toolboxes. scipy is the core package for scientific routines in Python; it is meant to operate efficiently on numpy arrays, so that numpy and scipy work hand in hand.

Before implementing a routine, it is worth checking if the desired data processing is not already implemented in Scipy. As non-professional programmers, scientists often tend to re-invent the wheel, which leads to buggy, non-optimal, difficult-to-share and unmaintainable code. By contrast, Scipy's routines are optimized and tested, and should therefore be used when possible.

Warning: This tutorial is far from an introduction to numerical computing. As enumerating the different submodules and functions in scipy would be very boring, we concentrate instead on a few examples to give a general idea of how to use scipy for scientific computing. To begin with
>>> import numpy as np >>> import scipy
If you would like to know the objects used from Numpy, have a look at the scipy.__file__[:-1] file. On version 0.6.0, the whole Numpy namespace is imported by the line from numpy import *.
Notice how on the side of the window the resampling is less accurate and has a rippling effect. Signal has many window functions: hamming, bartlett, blackman... Signal has filtering (Gaussian, median filter, Wiener), but we will discuss this in the image paragraph.
we can do a maximum-likelihood fit of the observations to estimate the parameters of the underlying distribution. Here we fit a normal process to the observed data:
>>> loc, std = stats.norm.fit(a) >>> loc 0.003738964114102075 >>> std 0.97450996668871193
6.5.2 Percentiles
The median is the value with half of the observations below, and half above:
>>> np.median(a) 0.0071645570292782519
It is also called the percentile 50, because 50% of the observations are below it:
>>> stats.scoreatpercentile(a, 50) 0.0071645570292782519
The resulting output is composed of:
- The T statistic value: it is a number the sign of which is proportional to the difference between the two random processes and the magnitude is related to the significance of this difference.
- The p value: the probability of both processes being identical. If it is close to 1, the two processes are almost certainly identical. The closer it is to zero, the more likely it is that the processes have different means.
If we know that the random process belongs to a given family of random processes, such as normal processes,
0.0 >>> linalg.det(np.ones((3, 4))) Traceback (most recent call last): ... ValueError: expected square matrix
Other integration schemes are available with fixed_quad, quadrature, romberg. scipy.integrate also features routines for Ordinary differential equations (ODE) integration. In particular, scipy.integrate.odeint is a general-purpose integrator using LSODA (Livermore solver for ordinary differential equations with automatic method switching for stiff and non-stiff problems); see the ODEPACK Fortran library for more details. odeint solves first-order ODE systems of the form:
dy/dt = rhs(y1, y2, .., t0,...)
Note that in case you use the matrix type, the inverse is computed when requesting the I attribute:
>>> ma = np.matrix(arr, copy=False) >>> np.allclose(ma.I, iarr) True
As an introduction, let us solve the ODE dy/dt = -2y between t = 0..4, with the initial condition y(t=0) = 1. First the function computing the derivative of the position needs to be defined:
>>> def calc_derivative(ypos, time, counter_arr): ... counter_arr += 1 ... return -2*ypos ...
Finally computing the inverse of a singular matrix (its determinant is zero) will raise LinAlgError:
>>> arr = np.array([[3, 2], ... [6, 4]]) >>> linalg.inv(arr) Traceback (most recent call last): ... LinAlgError: singular matrix
An extra argument counter_arr has been added to illustrate that the function may be called several times for a single time step, until solver convergence. The counter array is defined as:
>>> counter = np.zeros((1,), np.uint16)
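The trajectory can then be computed, presumably along these lines (odeint imported from scipy.integrate as in the script below; full_output=True returns the info dictionary used further down):

>>> time_vec = np.linspace(0, 4, 40)
>>> yvec, info = odeint(calc_derivative, 1, time_vec,
...                     args=(counter,), full_output=True)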
Thus the derivative function has been called more than 40 times:
>>> counter array([129], dtype=uint16)
and the cumulative number of iterations for the first 10 convergences can be obtained by:
>>> info['nfe'][:10]
array([31, 35, 43, 49, 53, 57, 59, 63, 65, 69], dtype=int32)
For the recomposition, an alias for manipulating matrices will first be defined:
>>> asmat = np.asmatrix
The solver requires more iterations at the start. The final trajectory is seen in the Matplotlib figure:
"""Solve the ODE dy/dt = -2y between t = 0..4, with the initial condition y(t=0) = 1. """ import numpy as np from scipy.integrate import odeint import pylab as pl def calc_derivative(ypos, time): return -2*ypos time_vec = np.linspace(0, 4, 40) yvec = odeint(calc_derivative, 1, time_vec) pl.plot(time_vec, yvec) pl.xlabel(Time [s]) pl.ylabel(y position [m])
SVD is commonly used in statistics or signal processing. Many other standard decompositions (QR, LU, Cholesky, Schur), as well as solvers for linear systems, are available in scipy.linalg.
"""Damped spring-mass oscillator
"""
import numpy as np
from scipy.integrate import odeint
import pylab as pl

mass = 0.5
kspring = 4
cviscous = 0.4

nu_coef = cviscous / mass
om_coef = kspring / mass

def calc_deri(yvec, time, nuc, omc):
    return (yvec[1], -nuc * yvec[1] - omc * yvec[0])

time_vec = np.linspace(0, 10, 100)
yarr = odeint(calc_deri, (1, 0), time_vec, args=(nu_coef, om_coef))

pl.plot(time_vec, yarr[:, 0], label='y')
pl.plot(time_vec, yarr[:, 1], label="y'")
pl.legend()
Another example with odeint will be a damped spring-mass oscillator (2nd order oscillator). The position of a mass attached to a spring obeys the 2nd order ODE $y'' + 2 \epsilon \omega_0 y' + \omega_0^2 y = 0$ with $\omega_0^2 = k/m$, k the spring constant, m the mass, and $\epsilon = c/(2 m \omega_0)$ with c the damping coefficient. For a computing example, the parameters will be:
>>> mass = 0.5 # kg >>> kspring = 4 # N/m >>> cviscous = 0.4 # N s/m
For the odeint solver the 2nd order equation needs to be transformed into a system of two first-order equations for the vector Y = (y, y'). It will be convenient to define $\nu = 2 \epsilon \omega_0 = c/m$ and $\omega = \omega_0^2 = k/m$:
>>> nu_coef = cviscous/mass >>> om_coef = kspring/mass
Thus the function will calculate the velocity and acceleration by:
>>> def calc_deri(yvec, time, nuc, omc): ... return (yvec[1], -nuc * yvec[1] - omc * yvec[0]) ... >>> time_vec = np.linspace(0, 10, 100) >>> yarr = odeint(calc_deri, (1, 0), time_vec, args=(nu_coef, om_coef))
There is no Partial Differential Equations (PDE) solver in scipy. Some PDE packages are written in Python, such as fipy or SfePy.
The final position and velocity are shown in the following Matplotlib figure:
"""Damped spring-mass oscillator """ import numpy as np
>>> time_vec = np.arange(0, 20, time_step) >>> sig = np.sin(2 * np.pi / period * time_vec) + \ ... np.cos(10 * np.pi * time_vec)
However the observer does not know the signal frequency, only the sampling time step of the signal sig. The signal is supposed to come from a real function, so the Fourier transform will be symmetric. The fftfreq function will generate the sampling frequencies and fft will compute the fast Fourier transform:
>>> from scipy import fftpack >>> sample_freq = fftpack.fftfreq(sig.size, d=time_step) >>> sig_fft = fftpack.fft(sig)
Nevertheless only the positive part will be used for finding the frequency because the resulting power is symmetric:
>>> pidxs = np.where(sample_freq > 0)
>>> freqs = sample_freq[pidxs]
>>> power = np.abs(sig_fft)[pidxs]

import numpy as np
from scipy import fftpack
import pylab as pl

time_step = 0.1
period = 5.
time_vec = np.arange(0, 20, time_step)
sig = np.sin(2 * np.pi / period * time_vec) + np.cos(10 * np.pi * time_vec)

sample_freq = fftpack.fftfreq(sig.size, d=time_step)
sig_fft = fftpack.fft(sig)
pidxs = np.where(sample_freq > 0)
freqs, power = sample_freq[pidxs], np.abs(sig_fft)[pidxs]
freq = freqs[power.argmax()]

pl.figure()
pl.plot(freqs, power)
pl.ylabel('power')
pl.xlabel('Frequency [Hz]')
axes = pl.axes([0.3, 0.3, 0.5, 0.5])
pl.title('Peak frequency')
pl.plot(freqs[:8], power[:8])
pl.setp(axes, yticks=[])
Now only the main signal component will be extracted from the Fourier transform:
>>> sig_fft[np.abs(sample_freq) > freq] = 0
main_sig = fftpack.ifft(sig_fft)

pl.figure()
pl.plot(time_vec, sig)
pl.plot(time_vec, main_sig, linewidth=3)
pl.ylabel('Amplitude')
pl.xlabel('Time [s]')
A cubic interpolation can also be selected by providing the kind optional keyword argument:
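>>> # a sketch; measured_time, measures and computed_time are the names
>>> # assumed from the linear interpolation example
>>> from scipy.interpolate import interp1d
>>> cubic_interp = interp1d(measured_time, measures, kind='cubic')
>>> cubic_results = cubic_interp(computed_time)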
scipy.interpolate.interp2d is similar to interp1d, but for 2-D arrays. Note that for the interp family, the computed time must stay within the measured time range. See the summary exercise on Maximum wind speed prediction at the Sprog station (page 133) for a more advanced spline interpolation example.
This approach takes 20 ms on our computer. This simple algorithm becomes very slow as the size of the grid grows, so you should use optimize.brent instead for scalar functions:
>>> optimize.brent(f) -1.3064400120612139
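For reference, the scalar function minimized here and the brute-force grid search mentioned above might look like this (the definition of f is an assumption, but it is consistent with the minima found by brent and fminbound):

>>> import numpy as np
>>> from scipy import optimize
>>> def f(x):                      # assumed test function
...     return x**2 + 10*np.sin(x)
>>> grid = (-10, 10, 0.1)          # the grid-search version
>>> xmin_global = optimize.brute(f, (grid,))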
To find the local minimum, let's add some constraints on the variable using optimize.fminbound:
>>> # search the minimum only between 0 and 10 >>> optimize.fminbound(f, 0, 10) array([ 3.83746712])
You can find algorithms with the same functionalities for multi-dimensional problems in scipy.optimize. See the summary exercise on Non linear least squares curve fitting: application to point extraction in topographical lidar data (page 138) for a more advanced example.
Image processing routines may be sorted according to the category of processing they perform.
This resolution takes 4.11 ms on our computer. The problem with this approach is that, if the function has local minima (is not convex), the algorithm may find these local minima instead of the global minimum, depending on the initial point. If we don't know the neighborhood of the global minimum to choose the initial point, we need to resort to costlier global optimization.
In [35]: subplot(151)
Out[35]: <matplotlib.axes.AxesSubplot object at 0x925f46c>
In [36]: imshow(shifted_lena, cmap=cm.gray)
Out[36]: <matplotlib.image.AxesImage object at 0x9593f6c>
In [37]: axis('off')
Out[37]: (-0.5, 511.5, 511.5, -0.5)
In [39]: # etc.
Elementary mathematical-morphology operations use a structuring element in order to modify other geometrical structures. Let us first generate a structuring element:
>>> el = ndimage.generate_binary_structure(2, 1) >>> el array([[False, True, False], [ True, True, True], [False, True, False]], dtype=bool) >>> el.astype(np.int) array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
Erosion
>>> a = np.zeros((7,7), dtype=np.int) >>> a[1:6, 2:5] = 1 >>> a array([[0, 0, 0, 0, 0, 0, 0], [0, 0, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 0, 0], [0, 0, 0, 0, 0, 0, 0]]) >>> ndimage.binary_erosion(a).astype(a.dtype) array([[0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0, 0], [0, 0, 0, 1, 0, 0, 0], [0, 0, 0, 1, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0]]) >>> #Erosion removes objects smaller than the structure >>> ndimage.binary_erosion(a, structure=np.ones((5,5))).astype(a.dtype) array([[0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0]])
And many other filters in scipy.ndimage.filters and scipy.signal can be applied to images.

Exercise: Compare histograms for the different filtered images.
Dilation
>>> a = np.zeros((5, 5))
>>> a[2, 2] = 1
>>> a
array([[ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  1.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.]])
>>> ndimage.binary_dilation(a).astype(a.dtype)
array([[ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  1.,  0.,  0.],
       [ 0.,  1.,  1.,  1.,  0.],
       [ 0.,  0.,  1.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.]])
Exercise: Check that the area of the reconstructed square is smaller than the area of the initial square. (The opposite would occur if the closing step was performed before the opening.)

For gray-valued images, eroding (resp. dilating) amounts to replacing a pixel by the minimal (resp. maximal) value among pixels covered by the structuring element centered on the pixel of interest.
>>> a = np.zeros((7,7), dtype=np.int) >>> a[1:6, 1:6] = 3 >>> a[4,4] = 2; a[2,3] = 1 >>> a array([[0, 0, 0, 0, 0, 0, 0], [0, 3, 3, 3, 3, 3, 0], [0, 3, 3, 1, 3, 3, 0], [0, 3, 3, 3, 3, 3, 0], [0, 3, 3, 3, 2, 3, 0], [0, 3, 3, 3, 3, 3, 0], [0, 0, 0, 0, 0, 0, 0]]) >>> ndimage.grey_erosion(a, size=(3,3)) array([[0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 0, 0], [0, 0, 3, 2, 2, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0]])
Opening
>>> a = np.zeros((5,5), dtype=np.int) >>> a[1:4, 1:4] = 1; a[4, 4] = 1 >>> a array([[0, 0, 0, 0, 0], [0, 1, 1, 1, 0], [0, 1, 1, 1, 0], [0, 1, 1, 1, 0], [0, 0, 0, 0, 1]]) >>> # Opening removes small objects >>> ndimage.binary_opening(a, structure=np.ones((3,3))).astype(np.int) array([[0, 0, 0, 0, 0], [0, 1, 1, 1, 0], [0, 1, 1, 1, 0], [0, 1, 1, 1, 0], [0, 0, 0, 0, 0]]) >>> # Opening can also smooth corners >>> ndimage.binary_opening(a).astype(np.int) array([[0, 0, 0, 0, 0], [0, 0, 1, 0, 0], [0, 1, 1, 1, 0], [0, 0, 1, 0, 0], [0, 0, 0, 0, 0]])
Closing: ndimage.binary_closing

Exercise: Check that opening amounts to eroding, then dilating.

An opening operation removes small structures, while a closing operation fills small holes. Such operations can therefore be used to clean an image.
>>> >>> >>> >>> >>> >>> a = np.zeros((50, 50)) a[10:-10, 10:-10] = 1 a += 0.25*np.random.standard_normal(a.shape) mask = a>=0.5 opened_mask = ndimage.binary_opening(mask) closed_mask = ndimage.binary_closing(opened_mask)
Now we look for various information about the objects in the image:
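Here mask and sig form a synthetic test image; they were presumably generated along these lines (the exact expression is an assumption, consistent with the 8 objects found below):

>>> x, y = np.indices((100, 100))
>>> sig = np.sin(2*np.pi*x/50.)*np.sin(2*np.pi*y/50.)*(1+x*y/50.**2)**2
>>> mask = sig > 1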
>>> labels, nb = ndimage.label(mask) >>> nb 8 >>> areas = ndimage.sum(mask, labels, xrange(1, labels.max()+1)) >>> areas [190.0, 45.0, 424.0, 278.0, 459.0, 190.0, 549.0, 424.0] >>> maxima = ndimage.maximum(sig, labels, xrange(1, labels.max()+1)) >>> maxima [1.8023823799830032, 1.1352760475048373, 5.5195407887291426, 2.4961181804217221, 6.7167361922608864, 1.8023823799830032, 16.765472169131161, 5.5195407887291426] >>> ndimage.find_objects(labels==4) [(slice(30, 48, None), slice(30, 48, None))] >>> sl = ndimage.find_objects(labels==4) >>> imshow(sig[sl[0]])
Following the cumulative probability definition p_i from the previous section, the corresponding values will be:
>>> cprob = (np.arange(years_nb, dtype=np.float32) + 1)/(years_nb + 1)
Prediction with UnivariateSpline

In this section the quantile function will be estimated by using the UnivariateSpline class, which can represent a spline from points. The default behavior is to build a spline of degree 3, and points can have different weights according to their reliability. Variants are InterpolatedUnivariateSpline and LSQUnivariateSpline, on which error checking is going to change. In case a 2D spline is wanted, the BivariateSpline class family is provided. All those classes for 1D and 2D splines use the FITPACK Fortran subroutines; that's why a lower library access is available through the splrep and splev functions for respectively representing and evaluating a spline. Moreover interpolation functions without the use of FITPACK parameters are also provided for simpler use (see interp1d, interp2d, barycentric_interpolate and so on). For the Sprog maxima wind speeds, the UnivariateSpline will be used because a spline of degree 3 seems to correctly fit the data:
>>> from scipy.interpolate import UnivariateSpline >>> quantile_func = UnivariateSpline(cprob, sorted_max_speeds)
See the summary exercise on Image processing application: counting bubbles and unmolten grains (page 142) for a more advanced example.
The quantile function is now going to be evaluated from the full range of probabilities:
>>> nprob = np.linspace(0, 1, 1e2) >>> fitted_max_speeds = quantile_func(nprob)
In the current model, the maximum wind speed occurring every 50 years is defined as the upper 2% quantile. As a result, the cumulative probability value will be:
>>> fifty_prob = 1. - 0.02
So the storm wind speed occurring every 50 years can be guessed by:
>>> fifty_wind = quantile_func(fifty_prob) >>> fifty_wind array([ 32.97989825])
fitted_max_speeds = speed_spline(nprob)
fifty_prob = 1. - 0.02
fifty_wind = speed_spline(fifty_prob)

pl.figure()
pl.plot(sorted_max_speeds, cprob, 'o')
pl.plot(fitted_max_speeds, nprob, 'g--')
pl.plot([fifty_wind], [fifty_prob], 'o', ms=8., mfc='y', mec='y')
pl.text(30, 0.05, r'$V_{50} = %.2f \, m/s$' % fifty_wind)
pl.plot([fifty_wind, fifty_wind], [pl.axis()[2], fifty_prob], 'k--')
pl.xlabel('Annual wind speed maxima [$m/s$]')
pl.ylabel('Cumulative probability')
pl.figure()
pl.bar(np.arange(years_nb) + 1, max_speeds)
pl.axis('tight')
pl.xlabel('Year')
pl.ylabel('Annual wind speed maxima [$m/s$]')
Exercise with the Gumbell distribution

The interested readers are now invited to make an exercise by using the wind speeds measured over 21 years. The measurement period is around 90 minutes (the original period was around 10 minutes but the file size has been reduced for making the exercise setup easier). The data are stored in numpy format inside the file sprog-windspeeds.npy. Do not look at the source code for the plots until you have completed the exercise.

The first step will be to find the annual maxima by using numpy and plot them as a matplotlib bar figure.
"""Generate the exercise results on the Gumbell distribution """ import numpy as np from scipy.interpolate import UnivariateSpline import pylab as pl
The second step will be to use the Gumbell distribution on cumulative probabilities p_i defined as -log(-log(p_i)) for fitting a linear quantile function (remember that you can define the degree of the UnivariateSpline). Plotting the annual maxima versus the Gumbell distribution should give you the following figure.
"""Generate the exercise results on the Gumbell distribution """ import numpy as np from scipy.interpolate import UnivariateSpline import pylab as pl
def gumbell_dist(arr):
    return -np.log(-np.log(arr))

years_nb = 21
wspeeds = np.load('../data/sprog-windspeeds.npy')
max_speeds = np.array([arr.max() for arr in np.array_split(wspeeds, years_nb)])
sorted_max_speeds = np.sort(max_speeds)

cprob = (np.arange(years_nb, dtype=np.float32) + 1)/(years_nb + 1)
gprob = gumbell_dist(cprob)
speed_spline = UnivariateSpline(gprob, sorted_max_speeds, k=1)
nprob = gumbell_dist(np.linspace(1e-3, 1-1e-3, 1e2))
fitted_max_speeds = speed_spline(nprob)
fifty_prob = gumbell_dist(49./50.)
fifty_wind = speed_spline(fifty_prob)

pl.figure()
pl.plot(sorted_max_speeds, gprob, 'o')
pl.plot(fitted_max_speeds, nprob, 'g--')
pl.plot([fifty_wind], [fifty_prob], 'o', ms=8., mfc='y', mec='y')
pl.plot([fifty_wind, fifty_wind], [pl.axis()[2], fifty_prob], 'k--')
pl.text(35, -1, r'$V_{50} = %.2f \, m/s$' % fifty_wind)
pl.xlabel('Annual wind speed maxima [$m/s$]')
pl.ylabel('Gumbell cumulative probability')
6.12.2 Non linear least squares curve fitting: application to point extraction in topographical lidar data
The goal of this exercise is to fit a model to some data. The data used in this tutorial are lidar data and are described in detail in the following introductory paragraph. If you're impatient and want to practice now, please skip it and go directly to Loading and visualization (page 138).

Introduction

Lidar systems are optical rangefinders that analyze properties of scattered light to measure distances. Most of them emit a short light pulse towards a target and record the reflected signal. This signal is then processed to extract the distance between the lidar system and the target.

Topographical lidar systems are such systems embedded in airborne platforms. They measure distances between the platform and the Earth, so as to deliver information on the Earth's topography (see [Mallet09] (page 287) for more details).

In this tutorial, the goal is to analyze the waveform recorded by the lidar system 1 . Such a signal contains peaks whose center and amplitude allow computing the position and some characteristics of the hit target. When the footprint of the laser beam is around 1 m on the Earth surface, the beam can hit multiple targets during the two-way propagation (for example the ground and the top of a tree or building). The sum of the contributions of each target hit by the laser beam then produces a complex signal with multiple peaks, each one containing information about one target.

One state of the art method to extract information from these data is to decompose them into a sum of Gaussian functions where each function represents the contribution of a target hit by the laser beam. Therefore, we use the scipy.optimize module to fit a waveform to one or a sum of Gaussian functions.

Loading and visualization

Load the first waveform using:
>>> import numpy as np
>>> waveform_1 = np.load('data/waveform_1.npy')
>>> t = np.arange(len(waveform_1))   # bin indices, used for the fit below
1 The data used for this tutorial are part of the demonstration data available for the FullAnalyze software and were kindly provided by the GIS DRAIX.
As you can notice, this waveform is a 80-bin-length signal with a single peak.

Fitting a waveform with a simple Gaussian model

The signal is very simple and can be modeled as a single Gaussian function plus an offset corresponding to the background noise. To fit the signal with the function, we must:

- define the model
- propose an initial solution
- call scipy.optimize.leastsq

Model

A Gaussian function defined by

$B + A \exp\left(-\left(\frac{t - \mu}{\sigma}\right)^2\right)$

can be fitted to the data, with coeffs = [B, A, mu, sigma].
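The definition of model itself did not survive extraction; a minimal version, consistent with the residuals function and the four fitted coefficients shown below:

>>> def model(t, coeffs):
...     return coeffs[0] + coeffs[1] * np.exp(-((t - coeffs[2])/coeffs[3])**2)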
Initial solution

An approximate initial solution that we can find by looking at the graph is, for instance:

>>> x0 = np.array([3, 30, 15, 1], dtype=float)

Fit

scipy.optimize.leastsq minimizes the sum of squares of the function given as an argument. Basically, the function to minimize is the residuals (the difference between the data and the model):

>>> def residuals(coeffs, y, t):
...     return y - model(t, coeffs)

So let's get our solution by calling scipy.optimize.leastsq with the following arguments:

- the function to minimize
- an initial solution
- the additional arguments to pass to the function

>>> from scipy.optimize import leastsq
>>> x, flag = leastsq(residuals, x0, args=(waveform_1, t))
>>> print x
[  2.70363341  27.82020742  15.47924562   3.05636228]

Remark: from scipy v0.8 and above, you should rather use scipy.optimize.curve_fit, which takes the model and the data as arguments, so you don't need to define the residuals any more.
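A hedged sketch of the curve_fit variant: curve_fit expects the model parameters as separate arguments, so the model is rewritten accordingly (model_cf is an illustrative name, not from the original text):

>>> from scipy.optimize import curve_fit
>>> def model_cf(t, B, A, mu, sigma):
...     return B + A * np.exp(-((t - mu)/sigma)**2)
>>> popt, pcov = curve_fit(model_cf, t, waveform_1, p0=x0)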
Going further

- Try with a more complex waveform (for instance data/waveform_2.npy) that contains three significant peaks. You must adapt the model, which is now a sum of Gaussian functions instead of only one Gaussian peak.
- In some cases, writing an explicit function to compute the Jacobian is faster than letting leastsq estimate it numerically. Create a function to compute the Jacobian of the residuals and use it as an input for leastsq.
- When we want to detect very small peaks in the signal, or when the initial guess is too far from a good solution, the result given by the algorithm is often not satisfying. Adding constraints to the parameters of the model enables us to overcome such limitations. An example of a priori knowledge we can add is the sign of our variables (which are all positive). With the following initial solution:
>>> x0 = np.array([3, 50, 20, 1], dtype=float)
compare the result of scipy.optimize.leastsq and what you can get with scipy.optimize.fmin_slsqp when adding boundary constraints.

6.12.3 Image processing application: counting bubbles and unmolten grains

Statement of the problem

1. Open the image file MV_HFV_012.jpg and display it. Browse through the keyword arguments in the docstring of imshow to display the image with the right orientation (origin in the bottom left corner, and not the upper left corner as for standard arrays). This Scanning Electron Microscopy image shows a glass sample (light gray matrix) with some bubbles (in black) and unmolten sand grains (dark gray). We wish to determine the fraction of the sample covered by these three phases, and to estimate the typical size of sand grains and bubbles.
2. Crop the image to remove the lower panel with measure information.
3. Slightly filter the image with a median filter in order to refine its histogram. Check how the histogram changes.
4. Using the histogram of the filtered image, determine thresholds that allow defining masks for sand pixels, glass pixels and bubble pixels. Another option (homework): write a function that determines the thresholds automatically from the minima of the histogram.
5. Display an image in which the three phases are colored with three different colors.
6. Use mathematical morphology to clean the different phases.
7. Attribute labels to all bubbles and sand grains, and remove from the sand mask grains that are smaller than 10 pixels. To do so, use ndimage.sum or np.bincount to compute the grain sizes.
8. Compute the mean size of bubbles.
Proposed solution
6.12.4 Example of solution for the image processing exercise: unmolten grains in glass
1. Open the image file MV_HFV_012.jpg and display it. Browse through the keyword arguments in the docstring of imshow to display the image with the right orientation (origin in the bottom left corner, and not the upper left corner as for standard arrays).
>>> dat = imread('MV_HFV_012.jpg')
4. Using the histogram of the ltered image, determine thresholds that allow to dene masks for sand pixels, glass pixels and bubble pixels. Other option (homework): write a function that determines automatically the thresholds from the minima of the histogram.
>>> void = filtdat <= 50 >>> sand = np.logical_and(filtdat>50, filtdat<=114) >>> glass = filtdat > 114
2. Crop the image to remove the lower panel with measure information.
>>> dat = dat[60:]
3. Slightly lter the image with a median lter in order to rene its histogram. Check how the histogram changes.
>>> filtdat = ndimage.median_filter(dat, size=(7,7)) >>> hi_dat = np.histogram(dat, bins=np.arange(256)) >>> hi_filtdat = np.histogram(filtdat, bins=np.arange(256))
5. Display an image in which the three phases are colored with three different colors.
>>> phases = void.astype(np.int) + 2*glass.astype(np.int) + 3*sand.astype(np.int)
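Step 6 of the solution (cleaning the phases with mathematical morphology) did not survive extraction; the sand_op array used in step 7 below is presumably the sand mask cleaned with a binary opening, for instance:

>>> sand_op = ndimage.binary_opening(sand, iterations=2)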
7. Attribute labels to all bubbles and sand grains, and remove from the sand mask grains that are smaller than 10 pixels. To do so, use ndimage.sum or np.bincount to compute the grain sizes.
>>> sand_labels, sand_nb = ndimage.label(sand_op)
>>> sand_areas = np.array(ndimage.sum(sand_op, sand_labels,
...                                   np.arange(sand_labels.max() + 1)))
>>> mask = sand_areas > 100
>>> remove_small_sand = mask[sand_labels.ravel()].reshape(sand_labels.shape)
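Step 8 (mean size of bubbles) is also missing from this extraction; a sketch using np.bincount on the labelled bubble mask (the variable names are illustrative):

>>> bubbles_labels, bubbles_nb = ndimage.label(void)
>>> bubbles_areas = np.bincount(bubbles_labels.ravel())[1:]
>>> mean_bubble_size = bubbles_areas.mean()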
Part II

Advanced topics

CHAPTER 7

Advanced Python Constructs
author: Zbigniew Jędrzejewski-Szmek

This chapter is about some features of the Python language which can be considered advanced, in the sense that not every language has them, and also in the sense that they are more useful in more complicated programs or libraries, but not in the sense of being particularly specialized, or particularly complicated.

It is important to underline that this chapter is purely about the language itself: about features supported through special syntax complemented by functionality of the Python stdlib, which could not be implemented through clever external modules.

The process of developing the Python programming language, its syntax, is unique because it is very transparent: proposed changes are evaluated from various angles and discussed on public mailing lists, and the final decision takes into account the balance between the importance of envisioned use cases, the burden of carrying more language features, consistency with the rest of the syntax, and whether the proposed variant is the easiest to read, write, and understand. This process is formalised in Python Enhancement Proposals (PEPs). As a result, the features described in this chapter were added after it was shown that they indeed solve real problems and that their use is as simple as possible.

Chapter contents

- Iterators, generator expressions and generators (page 149)
  - Iterators (page 149)
  - Generator expressions (page 150)
  - Generators (page 150)
  - Bidirectional communication (page 151)
  - Chaining generators (page 153)
- Decorators (page 153)
  - Replacing or tweaking the original object (page 154)
  - Decorators implemented as classes and as functions (page 154)
  - Copying the docstring and other attributes of the original function (page 156)
  - Examples in the standard library (page 157)
  - Deprecation of functions (page 159)
  - A while-loop removing decorator (page 159)
  - A plugin registration system (page 160)
  - More examples and reading (page 161)
- Context managers (page 161)
  - Catching exceptions (page 162)
  - Using generators to define context managers (page 163)
When used in a loop, StopIteration is swallowed and causes the loop to finish. But with explicit invocation, we can see that once the iterator is exhausted, accessing it raises an exception.

Using the for..in loop also uses the __iter__ method. This allows us to transparently start the iteration over a sequence. But if we already have the iterator, we want to be able to use it in a for loop in the same way. In order to achieve this, iterators, in addition to next, are also required to have a method called __iter__ which returns the iterator (self).

Support for iteration is pervasive in Python: all sequences and unordered containers in the standard library allow this. The concept is also stretched to other things: e.g. file objects support iteration over lines.

>>> f = open('/etc/fstab')
>>> f is f.__iter__()
True

The file is an iterator itself and its __iter__ method doesn't create a separate object: only a single thread of sequential access is allowed.

In Python 2.7 and 3.x the list comprehension syntax was extended to dictionary and set comprehensions. A set is created when the generator expression is enclosed in curly braces. A dict is created when the generator expression contains "pairs" of the form key:value:

>>> {i for i in range(3)}
set([0, 1, 2])
>>> {i:i**2 for i in range(3)}
{0: 0, 1: 1, 2: 4}

If you are stuck at some previous Python version, the syntax is only a bit worse:

>>> set(i for i in 'abc')
set(['a', 'c', 'b'])
>>> dict((i, ord(i)) for i in 'abc')
{'a': 97, 'c': 99, 'b': 98}

Generator expressions are fairly simple, and there is not much to say here. Only one gotcha should be mentioned: in old Pythons the index variable (i) would leak; in versions >= 3 this is fixed.

7.1.3 Generators

Generators

A generator is a function that produces a sequence of results instead of a single value.
David Beazley, A Curious Course on Coroutines and Concurrency

A third way to create iterator objects is to call a generator function. A generator is a function containing the keyword yield. It must be noted that the mere presence of this keyword completely changes the nature of the function: the yield statement doesn't have to be invoked, or even reachable, but it causes the function to be marked as a generator. When a normal function is called, the instructions contained in the body start to be executed. When a generator is called, the execution stops before the first instruction in the body. An invocation of a generator function creates a generator object, adhering to the iterator protocol. As with normal function invocations, concurrent and recursive invocations are allowed.

When next is called, the function is executed until the first yield. The value given by each encountered yield statement becomes the return value of next. After executing the yield statement, the execution of this function is suspended.

>>> def f():
...     yield 1
...     yield 2
>>> f()
<generator object f at 0x...>
>>> gen = f()
>>> gen.next()
1
>>> gen.next()
2
>>> gen.next()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration
Let's go over the life of the single invocation of the generator function.

>>> def f():
...     print("-- start --")
...     yield 3
...     print("-- middle --")
...     yield 4
...     print("-- finished --")
>>> gen = f()
>>> next(gen)
-- start --
3
>>> next(gen)
-- middle --
4
>>> next(gen)
-- finished --
Traceback (most recent call last):
 ...
StopIteration

Contrary to a normal function, where executing f() would immediately cause the first print to be executed, gen is assigned without executing any statements in the function body. Only when gen.next() is invoked by next are the statements up to the first yield executed. The second next prints -- middle -- and execution halts on the second yield. The third next prints -- finished -- and falls off the end of the function. Since no yield was reached, an exception is raised.

What happens with the function after a yield, when the control passes to the caller? The state of each generator is stored in the generator object. From the point of view of the generator function, it looks almost as if it was running in a separate thread, but this is just an illusion: execution is strictly single-threaded, but the interpreter keeps and restores the state in between the requests for the next value.

Why are generators useful? As noted in the parts about iterators, a generator function is just a different way to create an iterator object. Everything that can be done with yield statements could also be done with next methods. Nevertheless, using a function and having the interpreter perform its magic to create an iterator has advantages. A function can be much shorter than the definition of a class with the required next and __iter__ methods. What is more important, it is easier for the author of the generator to understand the state which is kept in local variables, as opposed to instance attributes, which have to be used to pass data between consecutive invocations of next on an iterator object.

A broader question is why are iterators useful? When an iterator is used to power a loop, the loop becomes very simple. The code to initialise the state, to decide if the loop is finished, and to find the next value is extracted into a separate place. This highlights the body of the loop, the interesting part. In addition, it is possible to reuse the iterator code in other places.

7.1.4 Bidirectional communication

Python 2.5 (PEP 342) extended generators so that values can be passed not only out of, but also into a running generator, through two new methods on generator objects.

The first of the new methods is send(value), which is similar to next(), but passes value into the generator to be used for the value of the yield expression. In fact, g.next() and g.send(None) are equivalent.

The second of the new methods is throw(type, value=None, traceback=None), which is equivalent to:

raise type, value, traceback

at the point of the yield statement. Unlike raise (which immediately raises an exception from the current execution point), throw() first resumes the generator, and only then raises the exception. The word throw was picked because it is suggestive of putting the exception in another location, and is associated with exceptions in other languages.

What happens when an exception is raised inside the generator? It can be either raised explicitly or when executing some statements, or it can be injected at the point of a yield statement by means of the throw() method. In either case, such an exception propagates in the standard manner: it can be intercepted by an except or finally clause, or otherwise it causes the execution of the generator function to be aborted and propagates in the caller.

For completeness' sake, it's worth mentioning that generator iterators also have a close() method, which can be used to force a generator that would otherwise be able to provide more values to finish immediately. It allows the generator __del__ method to destroy objects holding the state of the generator.

Let's define a generator which just prints what is passed in through send and throw.

>>> import itertools
>>> def g():
...     print '--start--'
...     for i in itertools.count():
...         print '--yielding {}--'.format(i)
...         try:
...             ans = yield i
...         except GeneratorExit:
...             print '--closing--'
...             raise
...         except Exception as e:
...             print '--yield raised {!r}--'.format(e)
...         else:
...             print '--yield returned {!r}--'.format(ans)
>>> it = g()
>>> next(it)
--start--
--yielding 0--
0
>>> it.send(11)
--yield returned 11--
--yielding 1--
1
>>> it.throw(IndexError)
--yield raised IndexError()--
--yielding 2--
2
>>> it.close()
--closing--
Note: next or __next__?

In Python 2.x, the iterator method to retrieve the next value is called next. It is invoked implicitly through the global function next, which means that it should be called __next__, just like the global function iter calls __iter__. This inconsistency is corrected in Python 3.x, where it.next becomes it.__next__. For the other generator methods, send and throw, the situation is more complicated, because they are not called implicitly by the interpreter. Nevertheless, there's a proposed syntax extension to allow continue to take an argument which will be passed to send of the loop's iterator. If this extension is accepted, it's likely that gen.send will become gen.__send__. The last of the generator methods, close, is pretty obviously named incorrectly, because it is already invoked implicitly.
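7.1.5 Chaining generators

The opening of this passage did not survive extraction; the explicit loop that the next paragraph refers to is presumably of this shape, simply re-yielding every value produced by a subgenerator (some_other_generator is an illustrative name):

subgen = some_other_generator()
for v in subgen:
    yield v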
However, if the subgenerator is to interact properly with the caller in the case of calls to send(), throw() and close(), things become considerably more difficult. The yield statement has to be guarded by a try..except..finally structure similar to the one defined in the previous section to "debug" the generator function. Such code is provided in PEP 380; here it suffices to say that new syntax to properly yield from a subgenerator is being introduced in Python 3.3:

yield from some_other_generator()

This behaves like the explicit loop above, repeatedly yielding values from some_other_generator until it is exhausted, but it also forwards send, throw and close to the subgenerator.
7.2 Decorators
Summary

This amazing feature appeared in the language almost apologetically and with concern that it might not be that useful.
Bruce Eckel, An Introduction to Python Decorators

Since a function or a class are objects, they can be passed around. Since they are mutable objects, they can be modified. The act of altering a function or class object after it has been constructed but before it is bound to its name is called decorating.

There are two things hiding behind the name "decorator": one is the function which does the work of decorating, i.e. performs the real work, and the other one is the expression adhering to the decorator syntax, i.e. an at-symbol and the name of the decorating function.

A function can be decorated by using the decorator syntax for functions:
@decorator             # the decorator expression
def function():        # the function being defined
    pass
A function is defined in the standard way. An expression starting with @ placed before the function definition is the decorator. The part after @ must be a simple expression, usually just the name of a function or class. This part is evaluated first, and after the function defined below is ready, the decorator is called with the newly defined function object as the single argument. The value returned by the decorator is attached to the original name of the function.

Decorators can be applied to functions and to classes. For classes the semantics are identical: the original class definition is used as an argument to call the decorator, and whatever is returned is assigned under the original name.

Before the decorator syntax was implemented (PEP 318), it was possible to achieve the same effect by assigning the function or class object to a temporary variable, then invoking the decorator explicitly, and then assigning the return value to the name of the function. This sounds like more typing, and it is, and also the name of the decorated function doubling as a temporary variable must be used at least three times, which is prone to errors. Nevertheless, the example above is equivalent to:

def function():                  # the function is defined normally
    pass
function = decorator(function)   # and then passed through the decorator

Decorators can be stacked: the order of application is bottom-to-top, or inside-out. The semantics are such that the originally defined function is used as an argument for the first decorator, whatever is returned by the first decorator is used as an argument for the second decorator, ..., and whatever is returned by the last decorator is attached under the name of the original function.

The decorator syntax was chosen for its readability. Since the decorator is specified before the header of the function, it is obvious that it is not a part of the function body, and it is clear that it can only operate on the whole function. Because the expression is prefixed with @, it stands out and is hard to miss ("in your face", according to the PEP :) ). When more than one decorator is applied, each one is placed on a separate line in an easy to read way.
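The head of the decorator_with_arguments example did not survive extraction; judging from the printed output below, it is presumably:

>>> def decorator_with_arguments(arg):
...     print "defining the decorator"
...     def _decorator(function):
...         # in this inner function, arg is available too
...         print "doing decoration,", arg
...         return function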
...     return _decorator
>>> @decorator_with_arguments("abc")
... def function():
...     print "inside function"
defining the decorator
doing decoration, abc
>>> function()
inside function

The two trivial decorators above fall into the category of decorators which return the original function. If they were to return a new function, an extra level of nestedness would be required. In the worst case, three levels of nested functions.

>>> def replacing_decorator_with_args(arg):
...     print "defining the decorator"
...     def _decorator(function):
...         # in this inner function, arg is available too
...         print "doing decoration,", arg
...         def _wrapper(*args, **kwargs):
...             print "inside wrapper,", args, kwargs
...             return function(*args, **kwargs)
...         return _wrapper
...     return _decorator
>>> @replacing_decorator_with_args("abc")
... def function(*args, **kwargs):
...     print "inside function,", args, kwargs
...     return 14
defining the decorator
doing decoration, abc
>>> function(11, 12)
inside wrapper, (11, 12) {}
inside function, (11, 12) {}
14

The _wrapper function is defined to accept all positional and keyword arguments. In general we cannot know what arguments the decorated function is supposed to accept, so the wrapper function just passes everything to the wrapped function. One unfortunate consequence is that the apparent argument list is misleading.

Compared to decorators defined as functions, complex decorators defined as classes are simpler. When an object is created, the __init__ method is only allowed to return None, and the type of the created object cannot be changed. This means that when a decorator is defined as a class, it doesn't make much sense to use the argument-less form: the final decorated object would just be an instance of the decorating class, returned by the constructor call, which is not very useful. Therefore it's enough to discuss class-based decorators where arguments are given in the decorator expression and the decorator __init__ method is used for decorator construction.

>>> class decorator_class(object):
...     def __init__(self, arg):
...         # this method is called in the decorator expression
...         print "in decorator init,", arg
...         self.arg = arg
...     def __call__(self, function):
...         # this method is called to do the job
...         print "in decorator call,", self.arg
...         return function
>>> deco_instance = decorator_class('foo')
in decorator init, foo
>>> @deco_instance
... def function(*args, **kwargs):
...     print "in function,", args, kwargs
in decorator call, foo
>>> function()
in function, () {}

Contrary to normal rules (PEP 8) decorators written as classes behave more like functions and therefore their name often starts with a lowercase letter.

In reality, it doesn't make much sense to create a new class just to have a decorator which returns the original function. Objects are supposed to hold state, and such decorators are more useful when the decorator returns a new object.

>>> class replacing_decorator_class(object):
...     def __init__(self, arg):
...         # this method is called in the decorator expression
...         print "in decorator init,", arg
...         self.arg = arg
...     def __call__(self, function):
...         # this method is called to do the job
...         print "in decorator call,", self.arg
...         self.function = function
...         return self._wrapper
...     def _wrapper(self, *args, **kwargs):
...         print "in the wrapper,", args, kwargs
...         return self.function(*args, **kwargs)
>>> deco_instance = replacing_decorator_class('foo')
in decorator init, foo
>>> @deco_instance
... def function(*args, **kwargs):
...     print "in function,", args, kwargs
in decorator call, foo
>>> function(11, 12)
in the wrapper, (11, 12) {}
in function, (11, 12) {}

A decorator like this can do pretty much anything, since it can modify the original function object and mangle the arguments, call the original function or not, and afterwards mangle the return value.
7.2.3 Copying the docstring and other attributes of the original function
When a new function is returned by the decorator to replace the original function, an unfortunate consequence is that the original function name, the original docstring, and the original argument list are lost. Those attributes of the original function can partially be "transplanted" to the new function by setting __doc__ (the docstring), __module__ and __name__ (the full name of the function), and __annotations__ (extra information about arguments and the return value of the function, available in Python 3). This can be done automatically by using functools.update_wrapper.

functools.update_wrapper(wrapper, wrapped)
    Update a wrapper function to look like the wrapped function.
>>> import functools
>>> def better_replacing_decorator_with_args(arg):
...     print "defining the decorator"
...     def _decorator(function):
...         print "doing decoration,", arg
...         def _wrapper(*args, **kwargs):
...             print "inside wrapper,", args, kwargs
...             return function(*args, **kwargs)
...         return functools.update_wrapper(_wrapper, function)
...     return _decorator
>>> @better_replacing_decorator_with_args("abc")
... def function():
...     "extensive documentation"
...     print "inside function"
...     return 14
defining the decorator
doing decoration, abc
>>> function
<function function at 0x...>
>>> print function.__doc__
extensive documentation
One important thing is missing from the list of attributes which can be copied to the replacement function: the argument list. The default values for arguments can be modified through the __defaults__ and __kwdefaults__ attributes, but unfortunately the argument list itself cannot be set as an attribute. This means that help(function) will display a useless argument list which will be confusing for the user of the function. An effective but ugly way around this problem is to create the wrapper dynamically, using eval. This can be automated by using the external decorator module. It provides support for the decorator decorator, which takes a wrapper and turns it into a decorator which preserves the function signature.

To sum things up, decorators should always use functools.update_wrapper or some other means of copying function attributes.

7.2.4 Examples in the standard library

This is cleaner than using a multitude of flags to __init__.

staticmethod is applied to methods to make them "static", i.e. basically a normal function, but accessible through the class namespace. This can be useful when the function is only needed inside this class (its name would then be prefixed with _), or when we want the user to think of the method as connected to the class, despite an implementation which doesn't require this.

property is the pythonic answer to the problem of getters and setters. A method decorated with property becomes a getter which is automatically called on attribute access.

>>> class A(object):
...     @property
...     def a(self):
...         "an important attribute"
...         return "a value"
>>> A.a
<property object at 0x...>
>>> A().a
'a value'

In this example, A.a is a read-only attribute. It is also documented: help(A) includes the docstring for attribute a taken from the getter method. Defining a as a property allows it to be calculated on the fly, and has the side effect of making it read-only, because no setter is defined.

To have a setter and a getter, two methods are required, obviously. Since Python 2.6 the following syntax is preferred:

class Rectangle(object):
    def __init__(self, edge):
        self.edge = edge

    @property
    def area(self):
        """Computed area.

        Setting this updates the edge length to the proper value.
        """
        return self.edge**2

    @area.setter
    def area(self, area):
        self.edge = area ** 0.5

The way that this works is that the property decorator replaces the getter method with a property object. This object in turn has three methods, getter, setter, and deleter, which can be used as decorators. Their job is to set the getter, setter and deleter of the property object (stored as attributes fget, fset, and fdel). The getter can be set like in the example above, when creating the object. When defining the setter, we already have the property object under area, and we add the setter to it by using the setter method. All this happens when we are creating the class.

Afterwards, when an instance of the class has been created, the property object is special. When the interpreter executes attribute access, assignment, or deletion, the job is delegated to the methods of the property object. To make everything crystal clear, let's define a "debug" example:

>>> class D(object):
...     @property
...     def a(self):
...         print "getting", 1
...         return 1
...     @a.setter
...     def a(self, value):
...         print "setting", value
...     @a.deleter
...     def a(self):
...         print "deleting"
>>> D.a
<property object at 0x...>
>>> D.a.fget
<function a at 0x...>
>>> D.a.fset
<function a at 0x...>
>>> D.a.fdel
<function a at 0x...>
>>> d = D()    # ... varies, this is not the same a function
>>> d.a
getting 1
1
>>> d.a = 2
setting 2
>>> del d.a
deleting
>>> d.a
getting 1
1
Properties are a bit of a stretch for the decorator syntax. One of the premises of the decorator syntax, that the name is not duplicated, is violated, but nothing better has been invented so far. It is just good style to use the same name for the getter, setter, and deleter methods.

Some newer examples include:

- functools.lru_cache memoizes an arbitrary function maintaining a limited cache of arguments:answer pairs (Python 3.2)
- functools.total_ordering is a class decorator which fills in missing ordering methods (__lt__, __gt__, __le__, ...) based on a single available one (Python 2.7).
def find_answers():
    answers = []
    while True:
        ans = look_for_next_answer()
        if ans is None:
            break
        answers.append(ans)
    return answers
This is fine, as long as the body of the loop is fairly compact. Once it becomes more complicated, as often happens in real code, this becomes pretty unreadable. We could simplify this by using yield statements, but then the user would have to explicitly call list(find_answers()). We can define a decorator which constructs the list for us:
def vectorized(generator_func):
    def wrapper(*args, **kwargs):
        return list(generator_func(*args, **kwargs))
    return functools.update_wrapper(wrapper, generator_func)
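A usage sketch: with the decorator, the loop body can be written as a generator and the caller still receives a list (look_for_next_answer is the same hypothetical helper as above):

@vectorized
def find_answers():
    while True:
        ans = look_for_next_answer()
        if ans is None:
            break
        yield ans

The plugin-registration code that the next paragraph discusses did not survive extraction; a minimal sketch consistent with that description (class and method names as described there):

class WordProcessor(object):
    PLUGINS = []

    def process(self, text):
        for plugin in self.PLUGINS:
            text = plugin().cleanup(text)
        return text

    @classmethod
    def plugin(cls, plugin):
        # simply append the class to the list of plugins
        cls.PLUGINS.append(plugin)

@WordProcessor.plugin
class CleanMdashesExtension(object):
    def cleanup(self, text):
        return text.replace('&mdash;', u'\N{EM DASH}')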
Here we use a decorator to decentralise the registration of plugins. We call our decorator with a noun instead of a verb, because we use it to declare that our class is a plugin for WordProcessor. The plugin method simply appends the class to the list of plugins.

A word about the plugin itself: it replaces the HTML entity for em-dash with a real Unicode em-dash character. It exploits the unicode literal notation to insert a character by using its name in the unicode database ("EM DASH"). If the Unicode character was inserted directly, it would be impossible to distinguish it from an en-dash in the source of a program.
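7.3 Context managers

The with-statement scaffolding that this section builds on did not survive extraction; presumably it is the standard pattern, where

with manager() as var:
    do_something(var)

is, in the simplest case, equivalent to

var = manager().__enter__()
try:
    do_something(var)
finally:
    var.__exit__()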
In other words, the context manager protocol defined in PEP 343 permits the extraction of the boring part of a try..except..finally structure into a separate class, leaving only the interesting do_something block.

1. The __enter__ method is called first. It can return a value which will be assigned to var. The as-part is optional: if it isn't present, the value returned by __enter__ is simply ignored.
2. The block of code underneath with is executed. Just like with try clauses, it can either execute successfully to the end, or it can break, continue or return, or it can throw an exception. Either way, after the block is finished, the __exit__ method is called. If an exception was thrown, the information about the exception is passed to __exit__, which is described below in the next subsection. In the normal case, exceptions can be ignored, just like in a finally clause, and will be rethrown after __exit__ is finished.

Let's say we want to make sure that a file is closed immediately after we are done writing to it:

>>> class closing(object):
...     def __init__(self, obj):
...         self.obj = obj
...     def __enter__(self):
...         return self.obj
...     def __exit__(self, *args):
...         self.obj.close()
>>> with closing(open('/tmp/file', 'w')) as f:
...     f.write('the contents\n')

Here we have made sure that f.close() is called when the with block is exited. Since closing files is such a common operation, the support for this is already present in the file class. It has an __exit__ method which calls close and can be used as a context manager itself.

The common use for try..finally is releasing resources. Various different cases are implemented similarly: in the __enter__ phase the resource is acquired, in the __exit__ phase it is released, and the exception, if thrown, is propagated. As with files, there's often a natural operation to perform after the object has been used, and it is most convenient to have the support built in. With each release, Python provides support in more places:

- all file-like objects:
  - file: automatically closed
  - fileinput, tempfile (py >= 3.2)
  - bz2.BZ2File, gzip.GzipFile, tarfile.TarFile, zipfile.ZipFile
  - ftplib, nntplib: close connection (py >= 3.2 or 3.3)
- locks:
  - multiprocessing.RLock: lock and unlock
  - multiprocessing.Semaphore
- memoryview: automatically release (py >= 3.2 and 2.7)
- decimal.localcontext: modify precision of computations temporarily
- _winreg.PyHKEY: open and close hive key
- warnings.catch_warnings: kill warnings temporarily
- contextlib.closing: the same as the example above, call close
- parallel programming:
  - concurrent.futures.ThreadPoolExecutor: invoke in parallel then kill thread pool (py >= 3.2)
  - concurrent.futures.ProcessPoolExecutor: invoke in parallel then kill process pool (py >= 3.2)
  - nogil: solve the GIL problem temporarily (cython only :( )
The contextlib.contextmanager helper takes a generator and turns it into a context manager. The generator has to obey some rules which are enforced by the wrapper function, most importantly it must yield exactly once. The part before the yield is executed from __enter__, the block of code protected by the context manager is executed when the generator is suspended in yield, and the rest is executed in __exit__. If an exception is thrown, the interpreter hands it to the wrapper through __exit__ arguments, and the wrapper function then throws it at the point of the yield statement. Through the use of generators, the context manager is shorter and simpler.

Let's rewrite the closing example as a generator:

@contextlib.contextmanager
def closing(obj):
    try:
        yield obj
    finally:
        obj.close()

CHAPTER 8

Advanced Numpy

author: Pauli Virtanen

Numpy is at the base of Python's scientific stack of tools. Its purpose is simple: implementing efficient operations on many items in a block of memory. Understanding how it works in detail helps in making efficient use of its flexibility, taking useful shortcuts, and in building new work based on it.

This tutorial aims to cover:

- Anatomy of Numpy arrays, and its consequences. Tips and tricks.
- Universal functions: what, why, and what to do if you want a new one.
- Integration with other tools: Numpy offers several ways to wrap any data in an ndarray, without unnecessary copies.
- Recently added features, and what's in them for me: PEP 3118 buffers, generalized ufuncs, ...

Prerequisites

- Numpy (>= 1.2; preferably newer...)
- Cython (>= 0.12, for the Ufunc example)
- PIL (used in a couple of examples)

Source codes: http://pav.iki.fi/tmp/advnumpy-ex.zip
- Life of ndarray (page 165)
  - It's... (page 165)
  - Block of memory (page 166)
  - Data types (page 167)
  - Indexing scheme: strides (page 171)
  - Findings in dissection (page 177)
- Universal functions (page 177)
  - What they are? (page 177)
  - Exercise: building an ufunc from scratch (page 178)
  - Solution: building an ufunc from scratch (page 181)
  - Generalized ufuncs (page 184)
- Interoperability features (page 186)
  - Sharing multidimensional, typed data (page 186)
  - The old buffer protocol (page 186)
  - The old buffer protocol (page 186)
  - Array interface protocol (page 187)
  - The new buffer protocol: PEP 3118 (page 188)
  - PEP 3118 details (page 189)
- Siblings: chararray, maskedarray, matrix (page 193)
- Summary (page 194)
- Contributing to Numpy/Scipy (page 194)
  - Why (page 194)
  - Reporting bugs (page 194)
  - Contributing to documentation (page 195)
  - Contributing features (page 196)
  - How to help, in general (page 197)
typedef struct PyArrayObject {
        PyObject_HEAD

        /* Block of memory */
        char *data;

        /* Data type descriptor */
        PyArray_Descr *descr;

        /* Indexing scheme */
        int nd;
        npy_intp *dimensions;
        npy_intp *strides;

        /* Other stuff */
        PyObject *base;
        int flags;
        PyObject *weakreflist;
} PyArrayObject;
The owndata and writeable flags indicate the status of the memory block.
The descriptor dtype describes a single item in the array:

type       scalar type of the data, one of: int8, int16, float64, et al. (fixed size); str, unicode, void (flexible size)
itemsize   size of the data block
byteorder  byte order: big-endian > / little-endian < / not applicable |
fields     sub-dtypes, if it's a structured data type
shape      shape of the array, if it's a sub-array
Example: reading .wav files

The .wav file header:

chunk_id         "RIFF"
chunk_size       4-byte unsigned little-endian integer
format           "WAVE"
fmt_id           "fmt "
fmt_size         4-byte unsigned little-endian integer
audio_fmt        2-byte unsigned little-endian integer
num_channels     2-byte unsigned little-endian integer
sample_rate      4-byte unsigned little-endian integer
byte_rate        4-byte unsigned little-endian integer
block_align      2-byte unsigned little-endian integer
bits_per_sample  2-byte unsigned little-endian integer
data_id          "data"
data_size        4-byte unsigned little-endian integer

- 44-byte block of raw data (in the beginning of the file)
- ... followed by data_size bytes of actual sound data.

The .wav file header as a Numpy structured data type:

>>> wav_header_dtype = np.dtype([
...     ("chunk_id", (str, 4)),        # flexible-sized scalar type, item size 4
...     ("chunk_size", "<u4"),         # little-endian unsigned 32-bit integer
...     ("format", "S4"),              # 4-byte string
...     ("fmt_id", "S4"),
...     ("fmt_size", "<u4"),
...     ("audio_fmt", "<u2"),          #
...     ("num_channels", "<u2"),       # .. more of the same ...
...     ("sample_rate", "<u4"),        #
...     ("byte_rate", "<u4"),
...     ("block_align", "<u2"),
...     ("bits_per_sample", "<u2"),
...     ("data_id", ("S1", (2, 2))),   # sub-array, just for fun!
...     ("data_size", "u4"),
...     # the sound data itself cannot be represented here
... ])
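The listing that the next two bullets interpret is missing from this extraction; presumably it inspected the fields attribute of the dtype along these lines (chunk_id takes bytes 0-3 and chunk_size bytes 4-7, so format starts at offset 8):

>>> wav_header_dtype['format']
dtype('S4')
>>> wav_header_dtype.fields['format']
(dtype('S4'), 8)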
- The first element is the sub-dtype in the structured data, corresponding to the name format
- The second one is its offset (in bytes) from the beginning of the item

Note: Mini-exercise, make a "sparse" dtype by using offsets, and only some of the fields:

>>> wav_header_dtype = np.dtype(dict(
...     names=['format', 'sample_rate', 'data_id'],
...     offsets=[offset_1, offset_2, offset_3],  # counted from start of structure in bytes
...     formats=list of dtypes for each of the fields,
... ))

and use that to read the sample rate, and data_id (as sub-array).

>>> f = open('test.wav', 'r')
>>> wav_header = np.fromfile(f, dtype=wav_header_dtype, count=1)
>>> f.close()
>>> print(wav_header)
[ ('RIFF', 17402L, 'WAVE', 'fmt ', 16L, 1, 1, 16000L, 32000L, 2, 16, [['d', 'a'], ['t', 'a']], 17366L)]
>>> wav_header['sample_rate']
array([16000], dtype=uint32)

When accessing sub-arrays, the dimensions get added to the end!

Note: There are existing modules such as wavfile, audiolab, etc. for loading sound data...

Casting and re-interpretation/views

- casting
  - on assignment
  - on array construction
  - on arithmetic
  - etc.
  - and manually: .astype(dtype)
- data re-interpretation
  - manually: .view(dtype)
Casting
Casting in arithmetic, in a nutshell:

- only the type (not the value!) of operands matters
- the largest "safe" type able to represent both is picked
- scalars can "lose" to arrays in some situations

Casting in general copies data:
>>> x = np.array([1, 2, 3, 4], dtype=np.float)
>>> x
array([ 1.,  2.,  3.,  4.])
>>> y = x.astype(np.int8)
>>> y
array([1, 2, 3, 4], dtype=int8)
>>> y + 1
array([2, 3, 4, 5], dtype=int8)
>>> y + 256
array([1, 2, 3, 4], dtype=int8)
>>> y + 256.0
array([ 257.,  258.,  259.,  260.])
>>> y + np.array([256], dtype=np.int32)
array([258, 259, 260, 261])
Note: .view() makes views; it does not copy (or alter) the memory block, it only changes the dtype (and adjusts the array shape). A view shares memory with the original array, so modifying the original shows up in the view:

>>> x[1] = 5
>>> y
array([328193])
>>> y.base is x
True
Re-interpretation / viewing
Data block in memory (4 bytes):

0x01 || 0x02 || 0x03 || 0x04

- 4 of uint8, OR,
- 4 of int8, OR,
- 2 of int16, OR,
- 1 of int32, OR,
- 1 of float32, OR,
- ...

How to switch from one to another?

1. Switch the dtype:
>>> x = np.array([1, 2, 3, 4], dtype=np.uint8)
>>> x.dtype = "<i2"
>>> x
array([ 513, 1027], dtype=int16)
>>> 0x0201, 0x0403
(513, 1027)
where the last dimension contains the R, G, B, and alpha channels. How to make a (10, 10) structured array with field names 'r', 'g', 'b', 'a', without copying the data?
>>> y = ...                       # TODO

>>> assert (y['r'] == 1).all()
>>> assert (y['g'] == 2).all()
>>> assert (y['b'] == 3).all()
>>> assert (y['a'] == 4).all()
Solution
>>> y = x.view([('r', 'i1'), ('g', 'i1'), ('b', 'i1'), ('a', 'i1')])[:, :, 0]
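The setup for the stride example below did not survive extraction; it is presumably the standard C-ordered array:

>>> x = np.array([[1, 2, 3],
...               [4, 5, 6],
...               [7, 8, 9]], dtype=np.int16)
>>> x.strides
(6, 2)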
- Need to jump 6 bytes to find the next row
- Need to jump 2 bytes to find the next column

>>> y = np.array(x, order='F')
>>> y.strides
(2, 6)
>>> str(y.data)
'\x01\x00\x04\x00\x07\x00\x02\x00\x05\x00\x08\x00\x03\x00\x06\x00\t\x00'

- Need to jump 2 bytes to find the next row
- Need to jump 6 bytes to find the next column

Similarly in higher dimensions:

- C: last dimensions vary fastest (= smaller strides)
- F: first dimensions vary fastest

For shape = $(d_1, d_2, \ldots, d_n)$ and strides = $(s_1, s_2, \ldots, s_n)$:

$s_j^C = d_{j+1} d_{j+2} \cdots d_n \cdot \text{itemsize}$

$s_j^F = d_1 d_2 \cdots d_{j-1} \cdot \text{itemsize}$
What happened? ... we need to look into what x[0,1] actually means
>>> 0x0301, 0x0402
(769, 1026)
Transposition does not affect the memory layout of the data, only strides
>>> x.strides
(2, 1)
>>> y.strides
(1, 2)
At which byte in x.data does the item x[1,2] begin?

The answer (in Numpy):

- strides: the number of bytes to jump to find the next element
- 1 stride per dimension
>>> x.strides
(3, 1)
>>> byte_offset = 3*1 + 1*2    # to find x[1, 2]
>>> x.data[byte_offset]
'\x06'
>>> x[1, 2]
6
simple, flexible
>>> y.strides
(-4,)
>>> y = x[2:]
>>> y.__array_interface__['data'][0] - x.__array_interface__['data'][0]
8

>>> x = np.zeros((10, 10, 10), dtype=np.float)
>>> x.strides
(800, 80, 8)
>>> x[::2, ::3, ::4].strides
(1600, 240, 32)
Stride manipulation

>>> from numpy.lib.stride_tricks import as_strided
>>> help(as_strided)
as_strided(x, shape=None, strides=None)
    Make an ndarray from the given array with the given shape and strides

Warning: as_strided does not check that you stay inside the memory block bounds...

>>> x = np.array([1, 2, 3, 4], dtype=np.int16)
>>> as_strided(x, strides=(2*2,), shape=(2,))
array([1, 3], dtype=int16)
>>> x[::2]
array([1, 3], dtype=int16)

Broadcasting

Doing something useful with it: outer product of [1, 2, 3, 4] and [5, 6, 7]

>>> x = np.array([1, 2, 3, 4], dtype=np.int16)
>>> x2 = as_strided(x, strides=(0, 1*2), shape=(3, 4))
>>> x2
array([[1, 2, 3, 4],
       [1, 2, 3, 4],
       [1, 2, 3, 4]], dtype=int16)
>>> y = np.array([5, 6, 7], dtype=np.int16)
>>> y2 = as_strided(y, strides=(1*2, 0), shape=(3, 4))
>>> y2
array([[5, 5, 5, 5],
       [6, 6, 6, 6],
       [7, 7, 7, 7]], dtype=int16)
>>> x2 * y2
array([[ 5, 10, 15, 20],
       [ 6, 12, 18, 24],
       [ 7, 14, 21, 28]], dtype=int16)
Internally, array broadcasting is indeed implemented using 0-strides.

More tricks: diagonals

See Also: stride-diagonals.py

Challenge: Pick diagonal entries of the matrix (assume C memory order):
>>> x = np.array([[1, 2, 3],
...               [4, 5, 6],
...               [7, 8, 9]], dtype=np.int32)
>>> x_diag = as_strided(x, shape=(3,), strides=(???,))
Pick the first super-diagonal entries [2, 6]. And the sub-diagonals?

(Hint to the last two: slicing first moves the point where striding starts from.)

Solution

Pick diagonals:
>>> x_diag = as_strided(x, shape=(3,), strides=((3+1)*x.itemsize,))
>>> x_diag
array([1, 5, 9])
Note:

>>> y = np.diag(x, k=1)
>>> y
array([2, 6])

However,

>>> y.flags.owndata
True
It makes a copy?! Room for improvement... (bug #xxx)

See Also: stride-diagonals.py

Challenge: Compute the tensor trace

>>> x = np.arange(5*5*5*5).reshape(5, 5, 5, 5)
>>> s = 0
>>> for i in xrange(5):
...     for j in xrange(5):
...         s += x[j, i, j, i]

Solution

>>> y = as_strided(x, shape=(5, 5),
...                strides=((5*5*5 + 5)*x.itemsize, (5*5 + 1)*x.itemsize))
>>> s2 = y.sum()

CPU cache effects

- The CPU pulls data from main memory to its cache in blocks.
- If many array items consecutively operated on fit in a single block (small stride): fewer transfers are needed, so the operation is faster.

See Also: numexpr is designed to mitigate cache effects in array computing.

Example: inplace operations (caveat emptor)

Sometimes,

>>> a -= b

is not the same as

>>> a -= b.copy()

- x and x.transpose() share data
- x -= x.transpose() modifies the data element-by-element...
- because x and x.transpose() have different striding, modified data re-appears on the RHS
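To see the cache effect discussed above in practice, a hedged sketch (the array sizes and the stride factor 67 are arbitrary; timings vary by machine):

>>> x = np.zeros((20000,))               # contiguous: stride 8 bytes
>>> y = np.zeros((20000 * 67,))[::67]    # same element count: stride 67*8 bytes
# In IPython, %timeit x.sum() is markedly faster than %timeit y.sum(),
# because summing x makes much better use of the CPU cache.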
Findings in dissection:

- memory block: may be shared, .base, .data
- data type descriptor: structured data, sub-arrays, byte order, casting, viewing, .astype(), .view()
- strided indexing: strides, C/F-order, slicing w/ integers, as_strided, broadcasting, stride tricks, diag, CPU cache coherence
PyObject *python_ufunc = PyUFunc_FromFuncAndData(
    ufunc_loop,
    NULL,
    types,
    1,    /* ntypes */
    2,    /* num_inputs */
    1,    /* num_outputs */
    identity_element,
    name,
    docstring,
    unused)
- Automatically supports: broadcasting, casting, ...
- The author of an ufunc only has to supply the elementwise operation; Numpy takes care of the rest.
- The elementwise operation needs to be implemented in C (or, e.g., Cython).
Parts of an Ufunc
1. Provided by user
void ufunc_loop(void **args, int *dimensions, int *steps, void *data)
{
    /*
     * int8 output = elementwise_function(int8 input_1, int8 input_2)
     *
     * This function must compute the ufunc for many values at once,
     * in the way shown below.
     */
    char *input_1 = (char*)args[0];
    char *input_2 = (char*)args[1];
    char *output = (char*)args[2];
    int i;

    for (i = 0; i < dimensions[0]; ++i) {
        *output = elementwise_function(*input_1, *input_2);
        input_1 += steps[0];
        input_2 += steps[1];
        output += steps[2];
    }
}
say, 100 iterations or until z.real**2 + z.imag**2 > 1000. Use it to determine which c are in the Mandelbrot set.

Our function is a simple one, so make use of the PyUFunc_* helpers.
# Boilerplate Cython definitions
#
# The litany below is particularly long, but you don't really need to
# read this part; it just pulls in stuff from the Numpy C headers.
# ----------------------------------------------------------

cdef extern from "numpy/arrayobject.h":
    void import_array()
    ctypedef int npy_intp
    cdef enum NPY_TYPES:
        NPY_DOUBLE
        NPY_CDOUBLE
        NPY_LONG

cdef extern from "numpy/ufuncobject.h":
    void import_ufunc()
    ctypedef void (*PyUFuncGenericFunction)(char**, npy_intp*, npy_intp*, void*)
    object PyUFunc_FromFuncAndData(PyUFuncGenericFunction* func, void** data,
                                   char* types, int ntypes, int nin, int nout,
                                   int identity, char* name, char* doc, int c)

    # List of pre-defined loop functions
    void PyUFunc_f_f_As_d_d(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
    void PyUFunc_d_d(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
    void PyUFunc_f_f(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
    void PyUFunc_g_g(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
    void PyUFunc_F_F_As_D_D(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
    void PyUFunc_F_F(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
    void PyUFunc_D_D(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
    void PyUFunc_G_G(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
    void PyUFunc_ff_f_As_dd_d(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
    void PyUFunc_ff_f(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
    void PyUFunc_dd_d(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
    void PyUFunc_gg_g(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
    void PyUFunc_FF_F_As_DD_D(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
    void PyUFunc_DD_D(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
    void PyUFunc_FF_F(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
    void PyUFunc_GG_G(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
# The elementwise function (exercise version). The function header was
# lost in extraction; presumably a cdef taking pointers, along these lines:
cdef void mandel_single_point(double complex *z_in,
                              double complex *c_in,
                              double complex *z_out) nogil:
    #
    # Some points of note:
    #
    # - It's *NOT* allowed to call any Python functions here.
    #   The Ufunc loop runs with the Python Global Interpreter Lock released.
    #   Hence, the ``nogil``.
    # - And so all local variables must be declared with ``cdef``.
    # - Note also that this function receives *pointers* to the data.
    #
    cdef double complex z = z_in[0]
    cdef double complex c = c_in[0]
    cdef int k   # the integer we use in the for loop

    #
    # TODO: write the Mandelbrot iteration for one point here,
    # as you would write it in Python.
    #
    # Say, use 100 as the maximum number of iterations, and 1000
    # as the cutoff for z.real**2 + z.imag**2.
    #

    # TODO: mandelbrot iteration should go here

    # Return the answer for this point
    z_out[0] = z
# The actual ufunc declaration
# ----------------------------

cdef PyUFuncGenericFunction loop_func[1]
cdef char input_output_types[3]
cdef void *elementwise_funcs[1]

# Reminder: some pre-made Ufunc loops:
#
# ================  =======================================================
# ``PyUfunc_f_f``   ``float elementwise_func(float input_1)``
# ``PyUfunc_ff_f``  ``float elementwise_func(float input_1, float input_2)``
# ``PyUfunc_d_d``   ``double elementwise_func(double input_1)``
# ``PyUfunc_dd_d``  ``double elementwise_func(double input_1, double input_2)``
# ``PyUfunc_D_D``   ``elementwise_func(complex_double *input, complex_double* output)``
# ``PyUfunc_DD_D``  ``elementwise_func(complex_double *in1, complex_double *in2, complex_double* out)``
# ================  =======================================================
#
# Type codes:
#
# NPY_BOOL, NPY_BYTE, NPY_UBYTE, NPY_SHORT, NPY_USHORT, NPY_INT, NPY_UINT,
# NPY_LONG, NPY_ULONG, NPY_LONGLONG, NPY_ULONGLONG, NPY_FLOAT, NPY_DOUBLE,
# NPY_LONGDOUBLE, NPY_CFLOAT, NPY_CDOUBLE, NPY_CLONGDOUBLE, NPY_DATETIME,
# NPY_TIMEDELTA, NPY_OBJECT, NPY_STRING, NPY_UNICODE, NPY_VOID
loop_func[0] = ... TODO: suitable PyUFunc_* ...
input_output_types[0] = ... TODO ...
... TODO: fill in rest of input_output_types ...

# This thing is passed as the ``data`` parameter for the generic
# PyUFunc_* loop, to let it know which function it should call.
elementwise_funcs[0] = <void*>mandel_single_point

# Construct the ufunc:
mandel = PyUFunc_FromFuncAndData(
    loop_func,
    elementwise_funcs,
    input_output_types,
    1,         # number of supported input types
    TODO,      # number of input args
    TODO,      # number of output args
    0,         # identity element, never mind this
    "mandel",  # function name
    "mandel(z, c) -> computes z*z + c",  # docstring
    0          # unused
    )

8.2.3 Solution: building an ufunc from scratch

# The elementwise function
# ------------------------

cdef void mandel_single_point(double complex *z_in,
                              double complex *c_in,
                              double complex *z_out) nogil:
    #
    # Some points of note:
    #
    # - It's *NOT* allowed to call any Python functions here.
    #   The Ufunc loop runs with the Python Global Interpreter Lock released.
    #   Hence, the ``nogil``.
    # - And so all local variables must be declared with ``cdef``.
    # - Note also that this function receives *pointers* to the data;
    #   the "traditional" solution to passing complex variables around.
    #
    cdef double complex z = z_in[0]
    cdef double complex c = c_in[0]
    cdef int k   # the integer we use in the for loop

    # Straightforward iteration
    for k in range(100):
        z = z*z + c
        if z.real**2 + z.imag**2 > 1000:
            break

    # Return the answer for this point
    z_out[0] = z
# Reminder: some pre-made Ufunc loops:
#
# ================  =======================================================
# ``PyUfunc_f_f``   ``float elementwise_func(float input_1)``
# ``PyUfunc_ff_f``  ``float elementwise_func(float input_1, float input_2)``
# ``PyUfunc_d_d``   ``double elementwise_func(double input_1)``
# ``PyUfunc_dd_d``  ``double elementwise_func(double input_1, double input_2)``
# ``PyUfunc_D_D``   ``elementwise_func(complex_double *input, complex_double* output)``
# ``PyUfunc_DD_D``  ``elementwise_func(complex_double *in1, complex_double *in2, complex_double* out)``
# ================  =======================================================
#
# Type codes:
#
# NPY_BOOL, NPY_BYTE, NPY_UBYTE, NPY_SHORT, NPY_USHORT, NPY_INT, NPY_UINT,
# NPY_LONG, NPY_ULONG, NPY_LONGLONG, NPY_ULONGLONG, NPY_FLOAT, NPY_DOUBLE,
# NPY_LONGDOUBLE, NPY_CFLOAT, NPY_CDOUBLE, NPY_CLONGDOUBLE, NPY_DATETIME,
# NPY_TIMEDELTA, NPY_OBJECT, NPY_STRING, NPY_UNICODE, NPY_VOID
# Boilerplate Cython definitions
#
# You don't really need to read this part; it just pulls in stuff
# from the Numpy C headers.
# ----------------------------------------------------------

cdef extern from "numpy/arrayobject.h":
    void import_array()
    ctypedef int npy_intp
    cdef enum NPY_TYPES:
        NPY_CDOUBLE

cdef extern from "numpy/ufuncobject.h":
    void import_ufunc()
    ctypedef void (*PyUFuncGenericFunction)(char**, npy_intp*, npy_intp*, void*)
    object PyUFunc_FromFuncAndData(PyUFuncGenericFunction* func, void** data,
                                   char* types, int ntypes, int nin, int nout,
                                   int identity, char* name, char* doc, int c)
    void PyUFunc_DD_D(char**, npy_intp*, npy_intp*, void*)

# Required module initialization
# ------------------------------
import_array()
import_ufunc()
# The actual ufunc declaration
# ----------------------------

cdef PyUFuncGenericFunction loop_func[1]
cdef char input_output_types[3]
cdef void *elementwise_funcs[1]

loop_func[0] = PyUFunc_DD_D

input_output_types[0] = NPY_CDOUBLE
input_output_types[1] = NPY_CDOUBLE
input_output_types[2] = NPY_CDOUBLE

elementwise_funcs[0] = <void*>mandel_single_point

mandel = PyUFunc_FromFuncAndData(
    loop_func,
    elementwise_funcs,
    input_output_types,
    1,         # number of supported input types
    2,         # number of input args
    1,         # number of output args
    0,         # identity element, never mind this
    "mandel",  # function name
    "mandel(z, c) -> computes iterated z*z + c",  # docstring
    0          # unused
    )

Test it, for example, with:

import numpy as np
import mandel

x = np.linspace(-1.7, 0.6, 1000)
y = np.linspace(-1.4, 1.4, 1000)
c = x[None, :] + 1j*y[:, None]
z = mandel.mandel(c, c)

import matplotlib.pyplot as plt
plt.imshow(abs(z)**2 < 1000, extent=[-1.7, 0.6, -1.4, 1.4])
plt.gray()
plt.show()
Matrix product:
input_1 shape = (m, n)
input_2 shape = (n, p)
output shape = (m, p)

(m, n), (n, p) -> (m, p)
This is called the "signature" of the generalized ufunc. The dimensions on which the g-ufunc acts are the "core dimensions".
Status in Numpy
- g-ufuncs are in Numpy already ...
- new ones can be created with PyUFunc_FromFuncAndDataAndSignature
- ... but we don't ship with public g-ufuncs, except for testing, ATM
>>> import numpy.core.umath_tests as ut
>>> ut.matrix_multiply.signature
'(m,n),(n,p)->(m,p)'

>>> x = np.ones((10, 2, 4))
>>> y = np.ones((10, 4, 5))
>>> ut.matrix_multiply(x, y).shape
(10, 2, 5)
- the last two dimensions became core dimensions, and are modified as per the signature
- otherwise, the g-ufunc operates "elementwise"
- matrix multiplication this way could be useful for operating on many small matrices at once
TODO:

- RGBA images consist of 32-bit integers whose bytes are [RR, GG, BB, AA].
- Fill x with opaque red [255, 0, 0, 255].
- Mangle it to a (200, 200) 32-bit integer array so that PIL accepts it:
>>> img = Image.frombuffer("RGBA", (200, 200), data)
>>> img.save('test.png')
int m = dimension[1]; /* core dimensions are added after */
int n = dimension[2]; /* the main dimension; order as in  */
int p = dimension[3]; /* signature                        */

int i;

for (i = 0; i < dimensions[0]; ++i) {
    matmul_for_strided_matrices(input_1, input_2, output,
                                strides for each array...);
    input_1 += steps[0];
    input_2 += steps[1];
    output += steps[2];
}
import numpy as np
import Image

x = np.zeros((200, 200, 4), dtype=np.int8)
x[:, :, 0] = 254   # red
x[:, :, 3] = 255   # opaque

data = x.view(np.int32)   # Check that you understand why this is OK!

img = Image.frombuffer("RGBA", (200, 200), data)
img.save('test.png')

#
# Modify the original data, and save again.
#
# It turns out that PIL, which knows next to nothing about Numpy,
# happily shares the same data.
#
x[:, :, 1] = 254
img.save('test2.png')
- Data type information present
- Numpy-specific approach; slowly deprecated (but not going away)
- Not integrated in Python otherwise

See Also: Documentation: http://docs.scipy.org/doc/numpy/reference/arrays.interface.html
>>> x = np.array([[1, 2], [3, 4]])
>>> x.__array_interface__
{'data': (171694552, False),    # memory address of data, is readonly?
 'descr': [('', '<i4')],        # data type descriptor
 'typestr': '<i4',              # same, in another form
 'strides': None,               # strides; or None if in C-order
 'shape': (2, 2),
 'version': 3,
}

>>> import Image
>>> img = Image.open('test.png')
>>> img.__array_interface__
{'data': ...,                   # a very long string
 'shape': (200, 200, 4),
 'typestr': '|u1'}
>>> x = np.asarray(img)
>>> x.shape
(200, 200, 4)
>>> x.dtype
dtype('uint8')
Roundtrips work:

>>> z = np.asarray(y)
>>> z
array([[1, 2],
       [3, 4]])
>>> x[0, 0] = 9
>>> z
array([[9, 2],
       [3, 4]])

/*
 * Sample implementation of a custom data type that exposes an array
 * interface. (And does nothing else :)
 *
 * Requires Python >= 3.1
 */

/*
 * Mini-exercises:
 *
 * - make the array strided
 * - change the data type
 *
 */

#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include "structmember.h"
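The definition of the object's struct did not survive extraction; given that myobject_new below fills a small integer buffer and that the exposed buffer is a 2x2 array of int, it is presumably of this shape (the field name buffer matches the uses below):

typedef struct {
    PyObject_HEAD
    int buffer[4];
} PyMyObjectObject;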
static int
myobject_getbuffer(PyObject *obj, Py_buffer *view, int flags)
{
    PyMyObjectObject *self = (PyMyObjectObject*)obj;

    /* Called when something requests that a MyObject-type object
       provides a buffer interface */

    view->buf = self->buffer;
    view->readonly = 0;
    view->format = "i";
    view->len = 4;
    view->itemsize = sizeof(int);

    view->ndim = 2;
    view->shape = malloc(sizeof(Py_ssize_t) * 2);
    view->shape[0] = 2;
    view->shape[1] = 2;
    view->strides = malloc(sizeof(Py_ssize_t) * 2);
    view->strides[0] = 2*sizeof(int);
    view->strides[1] = sizeof(int);
    view->suboffsets = NULL;

    /* Note: if correct interpretation *requires* strides or shape,
       you need to check flags for what was requested, and raise
       appropriate errors. The same if the buffer is not readable. */

    view->obj = (PyObject*)self;
    Py_INCREF(self);

    return 0;
}

static void
myobject_releasebuffer(PyMemoryViewObject *self, Py_buffer *view)
{
    if (view->shape) {
        free(view->shape);
        view->shape = NULL;
    }
    if (view->strides) {
        free(view->strides);
        view->strides = NULL;
    }
}
static PyBufferProcs myobject_as_buffer = {
    (getbufferproc)myobject_getbuffer,
    (releasebufferproc)myobject_releasebuffer,
};

/*
 * Standard stuff follows
 */

PyTypeObject PyMyObject_Type;

static PyObject *
myobject_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds)
{
    PyMyObjectObject *self;
    static char *kwlist[] = {NULL};

    if (!PyArg_ParseTupleAndKeywords(args, kwds, "", kwlist)) {
        return NULL;
    }

    self = (PyMyObjectObject *)
        PyObject_New(PyMyObjectObject, &PyMyObject_Type);
    self->buffer[0] = 1;
    self->buffer[1] = 2;
    self->buffer[2] = 3;
    self->buffer[3] = 4;   /* completes the 2x2 matrix [[1, 2], [3, 4]] */
    return (PyObject*)self;
}
PyTypeObject PyMyObject_Type = {
    PyVarObject_HEAD_INIT(NULL, 0)
    "MyObject",
    sizeof(PyMyObjectObject),
    0,                          /* tp_itemsize */
    /* methods */
    0,                          /* tp_dealloc */
    0,                          /* tp_print */
    0,                          /* tp_getattr */
    0,                          /* tp_setattr */
    0,                          /* tp_reserved */
    0,                          /* tp_repr */
    0,                          /* tp_as_number */
    0,                          /* tp_as_sequence */
    0,                          /* tp_as_mapping */
    0,                          /* tp_hash */
    0,                          /* tp_call */
    0,                          /* tp_str */
    0,                          /* tp_getattro */
    0,                          /* tp_setattro */
    &myobject_as_buffer,        /* tp_as_buffer */
    Py_TPFLAGS_DEFAULT,         /* tp_flags */
    0,                          /* tp_doc */
    0,                          /* tp_traverse */
    0,                          /* tp_clear */
    0,                          /* tp_richcompare */
    0,                          /* tp_weaklistoffset */
    0,                          /* tp_iter */
    0,                          /* tp_iternext */
    0,                          /* tp_methods */
    0,                          /* tp_members */
    0,                          /* tp_getset */
    0,                          /* tp_base */
    0,                          /* tp_dict */
    0,                          /* tp_descr_get */
    0,                          /* tp_descr_set */
    0,                          /* tp_dictoffset */
    0,                          /* tp_init */
    0,                          /* tp_alloc */
    myobject_new,               /* tp_new */
    0,                          /* tp_free */
    0,                          /* tp_is_gc */
    0,                          /* tp_bases */
    0,                          /* tp_mro */
    0,                          /* tp_cache */
    0,                          /* tp_subclasses */
    0,                          /* tp_weaklist */
    0,                          /* tp_del */
    0,                          /* tp_version_tag */
};

struct PyModuleDef moduledef = {
    PyModuleDef_HEAD_INIT,
    "myobject",
    NULL,
    -1,
    NULL,
    NULL,
    NULL,
    NULL,
    NULL
};

PyObject *
PyInit_myobject(void)
{
    PyObject *m, *d;

    if (PyType_Ready(&PyMyObject_Type) < 0) {
        return NULL;
    }

    m = PyModule_Create(&moduledef);
    d = PyModule_GetDict(m);

    Py_INCREF(&PyMyObject_Type);
    PyDict_SetItemString(d, "MyObject", (PyObject *)&PyMyObject_Type);

    return m;
}
The np.matrix class: convenience?
- always 2-D
- * is the matrix product, not the elementwise one
>>> np.matrix([[1, 0], [0, 1]]) * np.matrix([[1, 2], [3, 4]])
matrix([[1, 2],
        [3, 4]])
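For contrast, plain ndarrays multiply elementwise (a quick check, not part of the original example):

>>> np.array([[1, 0], [0, 1]]) * np.array([[1, 2], [3, 4]])
array([[1, 0],
       [0, 4]])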
8.5 Summary
- Anatomy of the ndarray: data, dtype, strides
- Universal functions: elementwise operations, how to make new ones
- Ndarray subclasses
- Various buffer interfaces for integration with other tools
- Recent additions: PEP 3118, generalized ufuncs
Note: .view() has a second meaning: it can make an ndarray an instance of a specialized ndarray subclass
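A minimal sketch of this second use (the InfoArray class here is hypothetical, not from the original notes):

>>> import numpy as np
>>> class InfoArray(np.ndarray):
...     """Trivial ndarray subclass."""
...     pass
>>> x = np.arange(4)
>>> y = x.view(InfoArray)   # same data buffer, different class
>>> type(y)
<class '__main__.InfoArray'>
>>> y.base is x             # no copy was made
True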
8.6.1 Why
- "There's a bug?"
- "I don't understand what this is supposed to do?"
- "I have this fancy code. Would you like to have it?"
- "I'd like to help! What can I do?"
- Mailing lists (scipy.org/Mailing_Lists)
  - if you're unsure
  - no replies in a week or so? Just file a bug ticket.
- Subscribe to the scipy-dev mailing list (subscribers-only)
- Problem with mailing lists: you get mail
  - But: you can turn mail delivery off
  - Change your subscription options, at the bottom of http://mail.scipy.org/mailman/listinfo/scipy-dev
- Send a mail to the scipy-dev mailing list, asking for activation:
To: [email protected] Hi, Id like to edit Numpy/Scipy docstrings. My account is XXXXX Cheers, N. N.
- Check the style guide: http://docs.scipy.org/numpy/
- Don't be intimidated; to fix a small thing, just fix it

1. Edit
2. Edit sources and send patches (as for bugs)
3. Complain on the mailing list
0. What are you trying to do?
1. Small code snippet reproducing the bug (if possible)
   - What actually happens
   - What you'd expect
2. Platform (Windows / Linux / OSX, 32/64 bits, x86/PPC, ...)
3. Version of Numpy/Scipy
>>> print numpy.__version__
svn/trunk
In case you have old/broken Numpy installations lying around: if unsure, try to remove existing Numpy installations, and reinstall...
CHAPTER 9
Debugging code
This chapter is about debugging code. It is not specific to the scientific Python community, but the strategies that we will employ are tailored to its needs.

Prerequisites

- Numpy
- IPython
- nosetests (http://readthedocs.org/docs/nose/en/latest/)
- line_profiler (http://packages.python.org/line_profiler/)
- pyflakes (http://pypi.python.org/pypi/pyflakes)
- gdb for the C-debugging part
Chapter contents

- Avoiding bugs (page 198)
  - Coding best practices to avoid getting in trouble (page 198)
  - pyflakes: fast static analysis (page 199)
    - Running pyflakes on the current edited file (page 199)
    - A type-as-go spell-checker like integration (page 200)
- Debugging workflow (page 200)
- Using the Python debugger (page 201)
  - Invoking the debugger (page 201)
    - Postmortem (page 201)
    - Step-by-step execution (page 203)
    - Other ways of starting a debugger (page 204)
  - Debugger commands and interaction (page 205)
- Debugging segmentation faults using gdb (page 205)
    Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?

        -- Brian Kernighan

We all write buggy code. Accept it. Deal with it. Write your code with testing and debugging in mind.

- Keep It Simple, Stupid (KISS): What is the simplest thing that could possibly work?
- Don't Repeat Yourself (DRY): Every piece of knowledge must have a single, unambiguous, authoritative representation within a system. Constants, algorithms, etc...
- Try to limit interdependencies of your code (loose coupling).
- Give your variables, functions and modules meaningful names (not mathematical names).
(defun pyflakes-thisfile ()
  (interactive)
  (compile (format "pyflakes %s" (buffer-file-name))))

(define-minor-mode pyflakes-mode
  "Toggle pyflakes mode.
   With no argument, this command toggles the mode.
   Non-null prefix argument turns on the mode.
   Null prefix argument turns off the mode."
  ;; The initial value.
  nil
  ;; The indicator for the mode line.
  " Pyflakes"
  ;; The minor mode bindings.
  '(([f5] . pyflakes-thisfile)))

(add-hook 'python-mode-hook (lambda () (pyflakes-mode t)))
In vim

Use the pyflakes.vim plugin:

1. download the zip file from http://www.vim.org/scripts/script.php?script_id=2441
2. extract the files in ~/.vim/ftplugin/python
3. make sure your vimrc has "filetype plugin indent on"

In emacs

Use the flymake mode with pyflakes, documented at http://www.plope.com/Members/chrism/flymake-mode: add the following to your .emacs file:
(when (load "flymake" t)
  (defun flymake-pyflakes-init ()
    (let* ((temp-file (flymake-init-create-temp-buffer-copy
                       'flymake-create-temp-inplace))
           (local-file (file-relative-name
                        temp-file
                        (file-name-directory buffer-file-name))))
      (list "pyflakes" (list local-file))))
  (add-to-list 'flymake-allowed-file-name-masks
               '("\\.py\\'" flymake-pyflakes-init)))

(add-hook 'find-file-hook 'flymake-find-file-hook)
In TextMate Menu: TextMate -> Preferences -> Advanced -> Shell variables, add a shell variable:
TM_PYCHECKER=/Library/Frameworks/Python.framework/Versions/Current/bin/pyflakes
Then Ctrl-Shift-V is bound to a pyflakes report.

In vim

In your .vimrc (binds F5 to pyflakes):
autocmd FileType python let &mp = 'echo "*** running % ***" ; pyflakes %'
autocmd FileType tex,mp,rst,python imap <Esc>[15~ <C-O>:make!^M
autocmd FileType tex,mp,rst,python map <Esc>[15~ :make!^M
autocmd FileType tex,mp,rst,python set autowrite
1. Make it fail reliably. Find a test case that makes the code fail every time.
2. Divide and Conquer. Once you have a failing test case, isolate the failing code.
   - Which module.
   - Which function.
   - Which line of code.
   => isolate a small reproducible failure: a test case
3. Change one thing at a time and re-run the failing test case.
4. Use the debugger to understand what is going wrong.
5. Take notes and be patient. It may take a while.

Note: Once you have gone through this process (isolated a tight piece of code reproducing the bug, and fixed the bug using this piece of code), add the corresponding code to your test suite.
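For instance, a minimal sketch of such a regression test (hypothetical names, following the index_error.py example used below):

def test_index_error_fixed():
    # Once fixed, the isolated failing case becomes a regression test:
    lst = list('foobar')
    assert lst[len(lst) - 1] == 'r'   # the last valid index is len(lst) - 1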
/home/varoquau/dev/scipy-lecture-notes/advanced/debugging_optimizing/index_error.py in index_error()
      3 def index_error():
      4     lst = list('foobar')
----> 5     print lst[len(lst)]
      6
      7 if __name__ == '__main__':

IndexError: list index out of range
In [2]: %debug
> /home/varoquau/dev/scipy-lecture-notes/advanced/debugging_optimizing/index_error.py(5)index_error()
      4     lst = list('foobar')
----> 5     print lst[len(lst)]
      6

ipdb> list
      1 """Small snippet to raise an IndexError."""
      2
      3 def index_error():
      4     lst = list('foobar')
----> 5     print lst[len(lst)]
      6
      7 if __name__ == '__main__':
      8     index_error()
      9

ipdb> len(lst)
6
ipdb> print lst[len(lst)-1]
r
ipdb> quit

In [3]:
Post-mortem debugging without IPython In some situations you cannot use IPython, for instance to debug a script that wants to be called from the command line. In this case, you can call the script with python -m pdb script.py:
$ python -m pdb index_error.py > /home/varoquau/dev/scipy-lecture-notes/advanced/debugging_optimizing/index_error.py(1)<module>() -> """Small snippet to raise an IndexError.""" (Pdb) continue Traceback (most recent call last): File "/usr/lib/python2.6/pdb.py", line 1296, in main pdb._runscript(mainpyfile) File "/usr/lib/python2.6/pdb.py", line 1215, in _runscript self.run(statement) File "/usr/lib/python2.6/bdb.py", line 372, in run exec cmd in globals, locals File "<string>", line 1, in <module> File "index_error.py", line 8, in <module> index_error() File "index_error.py", line 5, in index_error print lst[len(lst)] IndexError: list index out of range Uncaught exception. Entering post mortem debugging Running cont or step will restart the program > /home/varoquau/dev/scipy-lecture-notes/advanced/debugging_optimizing/index_error.py(5)index_error() -> print lst[len(lst)] (Pdb)
Step into code with n(ext) and s(tep): next jumps to the next statement in the current execution context, while step will go across execution contexts, i.e. enable exploring inside function calls:
ipdb> s
> /home/varoquau/dev/scipy-lecture-notes/advanced/debugging_optimizing/wiener_filtering.py(35)iterated_wiener()
2    34     noisy_img = noisy_img
---> 35     denoised_img = local_mean(noisy_img, size=size)
     36     l_var = local_var(noisy_img, size=size)

ipdb> n
> /home/varoquau/dev/scipy-lecture-notes/advanced/debugging_optimizing/wiener_filtering.py(36)iterated_wiener()
     35     denoised_img = local_mean(noisy_img, size=size)
---> 36     l_var = local_var(noisy_img, size=size)
     37     for i in range(3):
Step-by-step execution

Situation: You believe a bug exists in a module but are not sure where. For instance, we are trying to debug wiener_filtering.py. Indeed the code runs, but the filtering does not work well.

Run the script with the debugger:
ipdb> n > /home/varoquau/dev/scipy-lecture-notes/advanced/debugging_optimizing/wiener_filtering.py( 36 l_var = local_var(noisy_img, size=size) ---> 37 for i in range(3): 38 res = noisy_img - denoised_img ipdb> print l_var [[5868 5379 5316 ..., 5071 4799 5149] [5013 363 437 ..., 346 262 4355] [5379 410 344 ..., 392 604 3377] ..., [ 435 362 308 ..., 275 198 1632] [ 548 392 290 ..., 248 263 1653] [ 466 789 736 ..., 1835 1725 1940]] ipdb> print l_var.min() 0
Oh dear, nothing but integers, and 0 variation. Here is our bug: we are doing integer arithmetic.

Raising exceptions on numerical errors

When we run the wiener_filtering.py file, the following warnings are raised:
In [2]: %run wiener_filtering.py
Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide

We can turn these warnings into exceptions, which enables us to do post-mortem debugging on them, and find our problem more quickly:

In [3]: np.seterr(all='raise')
Out[3]: {'divide': 'print', 'invalid': 'print', 'over': 'print', 'under': 'ignore'}

For reference, this is the debugger session behind the stepping shown above: run the script under the debugger, set a break point at line 34 using b 34, and continue execution to the next breakpoint with c(ont(inue)):

In [1]: %run -d wiener_filtering.py
*** Blank or comment
*** Blank or comment
*** Blank or comment
Breakpoint 1 at /home/varoquau/dev/scipy-lecture-notes/advanced/debugging_optimizing/wiener_filtering.py:4
NOTE: Enter 'c' at the ipdb> prompt to start your script.
> <string>(1)<module>()

ipdb> n
> /home/varoquau/dev/scipy-lecture-notes/advanced/debugging_optimizing/wiener_filtering.py(4)<module>()
      3
1---> 4 import numpy as np
      5 import scipy as sp

ipdb> b 34
Breakpoint 2 at /home/varoquau/dev/scipy-lecture-notes/advanced/debugging_optimizing/wiener_filtering.py:34

ipdb> c
> /home/varoquau/dev/scipy-lecture-notes/advanced/debugging_optimizing/wiener_filtering.py(34)iterated_wiener()
     33     """
2--> 34     noisy_img = noisy_img
     35     denoised_img = local_mean(noisy_img, size=size)

Other ways of starting a debugger

Raising an exception as a poor man's break point

If you find it tedious to note the line number to set a break point, you can simply raise an exception at the point that you want to inspect and use IPython's %debug. Note that in this case you cannot step or continue the execution.

Debugging test failures using nosetests

You can run nosetests --pdb to drop in post-mortem debugging on exceptions, and nosetests --pdb-failure to inspect test failures using the debugger.
In addition, you can use the IPython interface for the debugger in nose by installing the nose plugin ipdbplugin. You can then pass --ipdb and --ipdb-failure options to nosetests.

Calling the debugger explicitly

Insert the following line where you want to drop in the debugger:
import pdb; pdb.set_trace()
365 (gdb)
We get a segfault, and gdb captures it for post-mortem debugging in the C level stack (not the Python call stack). We can debug the C call stack using gdb's commands:
(gdb) up #1 0x004af4f5 in _copy_from_same_shape (dest=<value optimized out>, src=<value optimized out>, myfunc=0x496780 <_strided_byte_copy>, swap=0) at numpy/core/src/multiarray/ctors.c:748 748 myfunc(dit->dataptr, dest->strides[maxaxis],
Warning: When running nosetests, the output is captured, and thus it seems that the debugger does not work. Simply run the nosetests with the -s flag.
Graphical debuggers For stepping through code and inspecting variables, you might nd it more convenient to use a graphical debugger such as winpdb. Alternatively, pudb is a good semi-graphical debugger with a text user interface in the console.
As you can see, right now, we are in the C code of numpy. We would like to know what is the Python code that triggers this segfault, so we go up the stack until we hit the Python execution loop:
(gdb) up #8 0x080ddd23 in call_function (f= Frame 0x85371ec, for file /home/varoquau/usr/lib/python2.6/site-packages/numpy/core/arrayprint at ../Python/ceval.c:3750 3750 ../Python/ceval.c: No such file or directory. in ../Python/ceval.c
(gdb) up #9 PyEval_EvalFrameEx (f= Frame 0x85371ec, for file /home/varoquau/usr/lib/python2.6/site-packages/numpy/core/arrayprint at ../Python/ceval.c:2412 2412 in ../Python/ceval.c (gdb)
Warning: Debugger commands are not Python code

You cannot name the variables the way you want. For instance, you cannot override the variables in the current frame with the same name: use different names than your local variables when typing code in the debugger.
Once we are in the Python execution loop, we can use our special Python helper function. For instance, we can find the corresponding Python code:
(gdb) pyframe /home/varoquau/usr/lib/python2.6/site-packages/numpy/core/arrayprint.py (158): _leading_trailing (gdb)
(gdb) up ... (gdb) up #34 0x080dc97a in PyEval_EvalFrameEx (f= Frame 0x82f064c, for file segfault.py, line 11, in print_big_array (small_array=<numpy.ndarray 1630 ../Python/ceval.c: No such file or directory. in ../Python/ceval.c (gdb) pyframe segfault.py (12): print_big_array
Thus the segfault happens when printing big_array[-10:]. The reason is simply that big_array has been allocated with its end outside the program memory.

Note: For a list of Python-specific commands defined in the gdbinit, read the source of this file.
Wrap up exercise

The following script is well documented and hopefully legible. It seeks to answer a problem of actual interest for numerical computing, but it does not work... Can you debug it?

Python source code: to_debug.py
CHAPTER 10

Optimizing code
author: Gaël Varoquaux

    Premature optimization is the root of all evil

        -- Donald Knuth

This chapter deals with strategies to make Python code go faster.

Prerequisites

- line_profiler (http://packages.python.org/line_profiler/)
Chapter contents

- Optimization workflow (page 208)
- Profiling Python code (page 209)
  - Timeit (page 209)
  - Profiler (page 209)
  - Line-profiler (page 210)
- Making code go faster (page 211)
  - Algorithmic optimization (page 211)
    - Example of the SVD (page 211)
  - Writing faster numerical code (page 212)
10.2.1 Timeit
In IPython, use timeit (http://docs.python.org/library/timeit.html) to time elementary operations:
In [1]: import numpy as np In [2]: a = np.arange(1000) In [3]: %timeit a ** 2 100000 loops, best of 3: 5.73 us per loop In [4]: %timeit a ** 2.1 1000 loops, best of 3: 154 us per loop In [5]: %timeit a * a 100000 loops, best of 3: 5.56 us per loop
   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1   14.457   14.457   14.479   14.479 decomp.py:849(svd)
        1    0.054    0.054    0.054    0.054 {method 'random_sample' of 'mtrand.RandomState' objects}
        1    0.017    0.017    0.021    0.021 function_base.py:645(asarray_chkfinite)
       54    0.011    0.000    0.011    0.000 {numpy.core._dotblas.dot}
        2    0.005    0.002    0.005    0.002 {method 'any' of 'numpy.ndarray' objects}
        6    0.001    0.000    0.001    0.000 ica.py:195(gprime)
        6    0.001    0.000    0.001    0.000 ica.py:192(g)
       14    0.001    0.000    0.001    0.000 {numpy.linalg.lapack_lite.dsyevd}
       19    0.001    0.000    0.001    0.000 twodim_base.py:204(diag)
        1    0.001    0.001    0.008    0.008 ica.py:69(_ica_par)
        1    0.001    0.001   14.551   14.551 {execfile}
      107    0.000    0.000    0.001    0.000 defmatrix.py:239(__array_finalize__)
        7    0.000    0.000    0.004    0.001 ica.py:58(_sym_decorrelation)
        7    0.000    0.000    0.002    0.000 linalg.py:841(eigh)
      172    0.000    0.000    0.000    0.000 {isinstance}
        1    0.000    0.000   14.551   14.551 demo.py:1(<module>)
       29    0.000    0.000    0.000    0.000 numeric.py:180(asarray)
       35    0.000    0.000    0.000    0.000 defmatrix.py:193(__new__)
       35    0.000    0.000    0.001    0.000 defmatrix.py:43(asmatrix)
       21    0.000    0.000    0.001    0.000 defmatrix.py:287(__mul__)
       41    0.000    0.000    0.000    0.000 {numpy.core.multiarray.zeros}
       28    0.000    0.000    0.000    0.000 {method 'transpose' of 'numpy.ndarray' objects}
        1    0.000    0.000    0.008    0.008 ica.py:97(fastica)
...
Use this to guide your choice between strategies.

Note: For long-running calls, use %time instead of %timeit; it is less precise but faster.
Clearly the svd (in decomp.py) is what takes most of our time, a.k.a. the bottleneck. We have to find a way to make this step go faster, or to avoid this step (algorithmic optimization). Spending time on the rest of the code is useless.
10.2.2 Proler
Useful when you have a large program to profile, for example the following file:
# demo.py
import numpy as np
from scipy import linalg
from ica import fastica

def test():
    data = np.random.random((5000, 100))
    u, s, v = linalg.svd(data)
    pca = np.dot(u[:10, :], data)
    results = fastica(pca.T, whiten=False)

test()
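One way to collect such a profile is with the standard cProfile module, or with IPython's %run -p (generic usage, not specific to these notes):

$ python -m cProfile -s cumulative demo.py

# or, inside IPython:
In [1]: %run -p demo.py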
10.2.3 Line-profiler
The profiler is great: it tells us which function takes most of the time, but not where it is called. For this, we use the line_profiler: in the source file, we decorate a few functions that we want to inspect with @profile (no need to import it):
@profile
def test():
    data = np.random.random((5000, 100))
    u, s, v = linalg.svd(data)
    pca = np.dot(u[:10, :], data)
    results = fastica(pca.T, whiten=False)
Then we run the script using the kernprof.py program, with switches -l and -v:
~ $ kernprof.py -l -v demo.py
Wrote profile results to demo.py.lprof Timer unit: 1e-06 s File: demo.py Function: test at line 5 Total time: 14.2793 s Line # Hits Time Per Hit % Time Line Contents ============================================================== 5 @profile 6 def test(): 7 1 19015 19015.0 0.1 data = np.random.random((5000, 100)) 8 1 14242163 14242163.0 99.7 u, s, v = linalg.svd(data)
     9         1        10282  10282.0      0.1      pca = np.dot(u[:10, :], data)
    10         1         7799   7799.0      0.1      results = fastica(pca.T, whiten=False)
The SVD is taking all the time. We need to optimise this line.
Note: we need "global a" in the timeit statement so that it works, as it is assigning to a, and timeit thus considers it a local variable.

Be easy on the memory: use views, and not copies

Copying big arrays is as costly as making simple numerical operations on them:
In [1]: a = np.zeros(1e7) In [2]: %timeit a.copy() 10 loops, best of 3: 124 ms per loop In [3]: %timeit a + 1 10 loops, best of 3: 112 ms per loop
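In the same spirit, in-place operations avoid the copy altogether; a sketch (timings vary across machines, and note the "global a" trick discussed above):

In [4]: %timeit global a ; a = a + 1   # allocates a temporary array
In [5]: %timeit global a ; a += 1      # works in place, no copy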
Beware of cache effects

Memory access is cheaper when it is grouped: accessing a big array in a continuous way is much faster than random access. This implies amongst other things that smaller strides are faster (see CPU cache effects (page 175)):
In [1]: c = np.zeros((1e4, 1e4), order='C')

In [2]: %timeit c.sum(axis=0)
1 loops, best of 3: 3.89 s per loop

In [3]: %timeit c.sum(axis=1)
1 loops, best of 3: 188 ms per loop

In [4]: c.strides
Out[4]: (80000, 8)
Real incomplete SVDs, e.g. computing only the first 10 eigenvectors, can be computed with arpack, available in scipy.sparse.linalg.eigsh.

Computational linear algebra

For certain algorithms, many of the bottlenecks will be linear algebra computations. In this case, using the right function to solve the right problem is key. For instance, an eigenvalue problem with a symmetric matrix is easier to solve than with a general matrix. Also, most often, you can avoid inverting a matrix and use a less costly (and more numerically stable) operation.

Know your computational linear algebra. When in doubt, explore scipy.linalg, and use %timeit to try out different alternatives on your data.
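To illustrate the last point, a small sketch (not from the original notes) comparing a direct solve with an explicit inverse:

import numpy as np
from scipy import linalg

rng = np.random.RandomState(0)
a = rng.random_sample((500, 500))
b = rng.random_sample(500)

x1 = linalg.solve(a, b)          # solve a x = b directly
x2 = np.dot(linalg.inv(a), b)    # explicit inverse: costlier, less stable
print np.allclose(x1, x2)        # same answer, different cost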
This is the reason why Fortran ordering or C ordering may make a big difference on operations. Using numexpr can be useful to automatically optimize code for such effects.

Use compiled code

The last resort, once you are sure that all the high-level optimizations have been explored, is to transfer the hot spots, i.e. the few lines or functions in which most of the time is spent, to compiled code. For compiled code, the preferred option is to use Cython: it is easy to transform existing Python code into compiled code, and with a good use of the numpy support it yields efficient code on numpy arrays, for instance by unrolling loops.
Warning: For all the above: profile and time your choices. Don't base your optimization on theoretical considerations.
11.1 Introduction
A (dense) matrix is:

- a mathematical object
- a data structure for storing a 2D array of values

Important features:

- memory allocated once for all items
  - usually a contiguous chunk, think NumPy ndarray
- fast access to individual items (*)
11.1.4 Prerequisites
Recent versions of:

- numpy
- scipy
- matplotlib (optional)
- ipython (the enhancements come in handy)
Warning for NumPy users:

- the multiplication with * is the matrix multiplication (dot product)
- not part of NumPy!
  - passing a sparse matrix object to NumPy functions expecting ndarray/matrix does not work
- attributes:
  - mtx.A - same as mtx.toarray()
  - mtx.T - transpose (same as mtx.transpose())
  - mtx.H - Hermitian (conjugate) transpose
  - mtx.real - real part of complex matrix
  - mtx.imag - imaginary part of complex matrix
  - mtx.size - the number of nonzeros (same as self.getnnz())
  - mtx.shape - the number of rows and columns (tuple)
- data usually stored in NumPy arrays
Examples
create some DIA matrices:
>>> data = np.array([[1, 2, 3, 4]]).repeat(3, axis=0)
>>> data
array([[1, 2, 3, 4],
       [1, 2, 3, 4],
       [1, 2, 3, 4]])
>>> offsets = np.array([0, -1, 2])
>>> mtx = sps.dia_matrix((data, offsets), shape=(4, 4))
>>> mtx
<4x4 sparse matrix of type '<type 'numpy.int32'>'
        with 9 stored elements (3 diagonals) in DIAgonal format>
>>> mtx.todense()
matrix([[1, 0, 3, 0],
        [1, 2, 0, 4],
        [0, 2, 3, 0],
        [0, 0, 3, 4]])

>>> data = np.arange(12).reshape((3, 4)) + 1
>>> data
array([[ 1,  2,  3,  4],
       [ 5,  6,  7,  8],
       [ 9, 10, 11, 12]])
>>> mtx = sps.dia_matrix((data, offsets), shape=(4, 4))
>>> mtx.data
array([[ 1,  2,  3,  4],
       [ 5,  6,  7,  8],
       [ 9, 10, 11, 12]])
>>> mtx.offsets
array([ 0, -1,  2])
>>> print mtx
  (0, 0)    1
  (1, 1)    2
  (2, 2)    3
  (3, 3)    4
  (1, 0)    5
  (2, 1)    6
  (3, 2)    7
  (0, 2)    11
  (1, 3)    12
>>> mtx.todense()
matrix([[ 1,  0, 11,  0],
        [ 5,  2,  0, 12],
        [ 0,  6,  3,  0],
        [ 0,  0,  7,  4]])
matrix-vector multiplication
>>> vec = np.ones((4,)) >>> vec array([ 1., 1., 1., 1.]) >>> mtx * vec array([ 12., 19., 9., 11.]) >>> mtx.toarray() * vec array([[ 1., 0., 11., 0.], [ 5., 2., 0., 12.], [ 0., 6., 3., 0.], [ 0., 0., 7., 4.]])
more slicing and indexing: List of Lists Format (LIL)

- row-based linked list
  - each row is a Python list (sorted) of column indices of non-zero elements
  - rows stored in a NumPy array (dtype=np.object)
  - non-zero values data stored analogously
- efficient for constructing sparse matrices incrementally
- constructor accepts:
  - dense matrix (array)
  - sparse matrix
  - shape tuple (create empty matrix)
- flexible slicing, changing sparsity structure is efficient
- slow arithmetic, slow column slicing due to being row-based
- use:
  - when sparsity pattern is not known a priori or changes
  - example: reading a sparse matrix from a text file
>>> mtx = sps.lil_matrix([[0, 1, 2, 0], [3, 0, 1, 0], [1, 0, 0, 1]])
>>> mtx.todense()
matrix([[0, 1, 2, 0],
        [3, 0, 1, 0],
        [1, 0, 0, 1]])
>>> print mtx
  (0, 1)    1
  (0, 2)    2
  (1, 0)    3
  (1, 2)    1
  (2, 0)    1
  (2, 3)    1
>>> mtx[:2, :]
<2x4 sparse matrix of type '<type 'numpy.int32'>'
        with 4 stored elements in LInked List format>
>>> mtx[:2, :].todense()
matrix([[0, 1, 2, 0],
        [3, 0, 1, 0]])
>>> mtx[1:2, [0, 2]].todense()
matrix([[3, 1]])
>>> mtx.todense()
matrix([[0, 1, 2, 0],
        [3, 0, 1, 0],
        [1, 0, 0, 1]])
Examples
create an empty LIL matrix:
>>> mtx = sps.lil_matrix((4, 5))
>>> mtx.todense()
matrix([[ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.]])
>>> mtx.toarray()
array([[ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.]])
Dictionary of Keys Format (DOK)

- subclass of Python dict
  - keys are (row, column) index tuples (no duplicate entries allowed)
  - values are corresponding non-zero values
- efficient for constructing sparse matrices incrementally
- constructor accepts:
  - dense matrix (array)
  - sparse matrix
  - shape tuple (create empty matrix)
- efficient O(1) access to individual elements
- flexible slicing, changing sparsity structure is efficient
- can be efficiently converted to a coo_matrix once constructed
- slow arithmetic (for loops with dict.iteritems())
- use: when sparsity pattern is not known a priori or changes
Examples
create a DOK matrix element by element:
>>> mtx = sps.dok_matrix((5, 5), dtype=np.float64)
>>> mtx
<5x5 sparse matrix of type '<type 'numpy.float64'>'
        with 0 stored elements in Dictionary Of Keys format>
>>> for ir in range(5):
...     for ic in range(5):
...         mtx[ir, ic] = 1.0 * (ir != ic)
>>> mtx
<5x5 sparse matrix of type '<type 'numpy.float64'>'
        with 25 stored elements in Dictionary Of Keys format>
>>> mtx.todense()
matrix([[ 0.,  1.,  1.,  1.,  1.],
        [ 1.,  0.,  1.,  1.,  1.],
        [ 1.,  1.,  0.,  1.,  1.],
        [ 1.,  1.,  1.,  0.,  1.],
        [ 1.,  1.,  1.,  1.,  0.]])
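Element access is indeed O(1) dictionary access; a quick check (not in the original):

>>> mtx[1, 1]
0.0
>>> mtx[1, 2]
1.0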
Coordinate Format (COO)

- also known as the "ijv" or "triplet" format
- three NumPy arrays: row, col, data
  - data[i] is value at (row[i], col[i]) position
  - permits duplicate entries
- subclass of _data_matrix (sparse matrix classes with data attribute)
- fast format for constructing sparse matrices
- constructor accepts:
  - dense matrix (array)
  - sparse matrix
  - shape tuple (create empty matrix)
  - (data, ij) tuple
- very fast conversion to and from CSR/CSC formats
- fast matrix * vector (sparsetools)
- fast and easy item-wise operations
  - manipulate data array directly (fast NumPy machinery)
- no slicing, no arithmetic (directly)
- use:
  - facilitates fast conversion among sparse formats
  - when converting to other format (usually CSR or CSC), duplicate entries are summed together
    - facilitates efficient construction of finite element matrices
Examples
create empty COO matrix:
>>> mtx = sps.coo_matrix((3, 4), dtype=np.int8) >>> mtx.todense() matrix([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]], dtype=int8)
no slicing...:
>>> mtx[2, 3] -----------------------------------------------------------Traceback (most recent call last): File "<ipython console>", line 1, in <module> TypeError: coo_matrix object is unsubscriptable
Compressed Sparse Row Format (CSR)

- row oriented
  - three NumPy arrays: indices, indptr, data
    - indices is array of column indices
    - data is array of corresponding nonzero values
    - indptr points to row starts in indices and data
    - length is n_row + 1, last item = number of values = length of both indices and data
    - nonzero values of the i-th row are data[indptr[i]:indptr[i+1]] with column indices indices[indptr[i]:indptr[i+1]]
    - item (i, j) can be accessed as data[indptr[i]+k], where k is position of j in indices[indptr[i]:indptr[i+1]]
- subclass of _cs_matrix (common CSR/CSC functionality)
  - subclass of _data_matrix (sparse matrix classes with data attribute)
- fast matrix vector products and other arithmetic (sparsetools)
- constructor accepts:
  - dense matrix (array)
  - sparse matrix
  - shape tuple (create empty matrix)
  - (data, ij) tuple
  - (data, indices, indptr) tuple
- efficient row slicing, row-oriented operations
- slow column slicing, expensive changes to the sparsity structure
- use: actual computations (most linear solvers support this format)
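A sketch of creating a CSR matrix with the (data, ij) constructor, with values chosen to match the attributes printed below (not in the original text):

>>> row = np.array([0, 0, 1, 2, 2, 2])
>>> col = np.array([0, 2, 2, 0, 1, 2])
>>> data = np.array([1, 2, 3, 4, 5, 6])
>>> mtx = sps.csr_matrix((data, (row, col)), shape=(3, 3))
>>> mtx.todense()
matrix([[1, 0, 2],
        [0, 0, 3],
        [4, 5, 6]])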
>>> mtx.data array([1, 2, 3, 4, 5, 6]) >>> mtx.indices array([0, 2, 2, 0, 1, 2]) >>> mtx.indptr array([0, 2, 3, 6])
Compressed Sparse Column Format (CSC)

- column oriented
  - three NumPy arrays: indices, indptr, data
    - indices is array of row indices
    - data is array of corresponding nonzero values
    - indptr points to column starts in indices and data
    - length is n_col + 1, last item = number of values = length of both indices and data
    - nonzero values of the i-th column are data[indptr[i]:indptr[i+1]] with row indices indices[indptr[i]:indptr[i+1]]
    - item (i, j) can be accessed as data[indptr[j]+k], where k is position of i in indices[indptr[j]:indptr[j+1]]
- subclass of _cs_matrix (common CSR/CSC functionality)
  - subclass of _data_matrix (sparse matrix classes with data attribute)
- fast matrix vector products and other arithmetic (sparsetools)
- constructor accepts:
  - dense matrix (array)
  - sparse matrix
  - shape tuple (create empty matrix)
  - (data, ij) tuple
  - (data, indices, indptr) tuple
- efficient column slicing, column-oriented operations
- slow row slicing, expensive changes to the sparsity structure
- use: actual computations (most linear solvers support this format)
Examples
create empty CSR matrix:
>>> mtx = sps.csr_matrix((3, 4), dtype=np.int8) >>> mtx.todense() matrix([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]], dtype=int8)
Examples
create empty CSC matrix:
>>> mtx = sps.csc_matrix((3, 4), dtype=np.int8) >>> mtx.todense() matrix([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]], dtype=int8)
Block Compressed Row Format (BSR)

- basically a CSR with dense sub-matrices of fixed shape instead of scalar items
  - block size (R, C) must evenly divide the shape of the matrix (M, N)
  - three NumPy arrays: indices, indptr, data
    - indices is array of column indices for each block
    - data is array of corresponding nonzero values of shape (nnz, R, C)
    - ...
  - subclass of _cs_matrix (common CSR/CSC functionality)
    - subclass of _data_matrix (sparse matrix classes with data attribute)
- fast matrix vector products and other arithmetic (sparsetools)
- constructor accepts:
  - dense matrix (array)
  - sparse matrix
  - shape tuple (create empty matrix)
  - (data, ij) tuple
  - (data, indices, indptr) tuple
- many arithmetic operations considerably more efficient than CSR for sparse matrices with dense submatrices
- use:
  - like CSR
  - vector-valued finite element discretizations
Examples
create empty BSR matrix with (1, 1) block size (like CSR...):
>>> mtx = sps.bsr_matrix((3, 4), dtype=np.int8) >>> mtx <3x4 sparse matrix of type <type numpy.int8> with 0 stored elements (blocksize = 1x1) in Block Sparse Row format> >>> mtx.todense() matrix([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]], dtype=int8)
a bug? create using (data, ij) tuple with (1, 1) block size (like CSR...):
>>> row = np.array([0, 0, 1, 2, 2, 2])
>>> col = np.array([0, 2, 2, 0, 1, 2])
>>> data = np.array([1, 2, 3, 4, 5, 6])
>>> mtx = sps.bsr_matrix((data, (row, col)), shape=(3, 3))
>>> mtx
<3x3 sparse matrix of type '<type 'numpy.int32'>'
        with 6 stored elements (blocksize = 1x1) in Block Sparse Row format>
>>> mtx.todense()
matrix([[1, 0, 2],
        [0, 0, 3],
        [4, 5, 6]])
>>> mtx.data
array([[[1]],
       [[2]],
       [[3]],
       [[4]],
       [[5]],
       [[6]]])
create using (data, indices, indptr) tuple with (2, 2) block size:
>>> indptr = np.array([0, 2, 3, 6]) >>> indices = np.array([0, 2, 2, 0, 1, 2]) >>> data = np.array([1, 2, 3, 4, 5, 6]).repeat(4).reshape(6, 2, 2) >>> mtx = sps.bsr_matrix((data, indices, indptr), shape=(6, 6)) >>> mtx.todense() matrix([[1, 1, 0, 0, 2, 2], [1, 1, 0, 0, 2, 2], [0, 0, 0, 0, 3, 3], [0, 0, 0, 0, 3, 3], [4, 4, 5, 5, 6, 6], [4, 4, 5, 5, 6, 6]]) >>> data array([[[1, 1], [1, 1]], [[2, 2], [2, 2]], [[3, 3], [3, 3]], [[4, 4], [4, 4]], [[5, 5], [5, 5]], [[6, 6], [6, 6]]])
11.2.3 Summary
format    matrix * vector    note
DIA       sparsetools        has data array, specialized
LIL       via CSR            arithmetic via CSR, incremental construction
DOK       python             O(1) item access, incremental construction
COO       sparsetools        has data array, facilitates fast conversion
CSR       sparsetools        has data array, fast row-wise ops
CSC       sparsetools        has data array, fast column-wise ops
BSR       sparsetools        has data array, specialized
both superlu and umfpack can be used (if the latter is installed) as follows: prepare a linear system:
>>> import numpy as np >>> import scipy.sparse as sps >>> mtx = sps.spdiags([[1, 2, 3, 4, 5], [6, 5, 8, 9, 10]], [0, 1], 5, 5) >>> mtx.todense() matrix([[ 1, 5, 0, 0, 0], [ 0, 2, 8, 0, 0], [ 0, 0, 3, 9, 0], [ 0, 0, 0, 4, 10], [ 0, 0, 0, 0, 5]]) >>> rhs = np.array([1, 2, 3, 4, 5])
- bicgstab (BIConjugate Gradient STABilized)
- cg (Conjugate Gradient) - symmetric positive definite matrices only
- cgs (Conjugate Gradient Squared)
- gmres (Generalized Minimal RESidual)
- minres (MINimum RESidual)
- qmr (Quasi-Minimal Residual)

Common Parameters

mandatory:

A : {sparse matrix, dense matrix, LinearOperator}
    The N-by-N matrix of the linear system.
b : {array, matrix}
    Right hand side of the linear system. Has shape (N,) or (N,1).

optional:

x0 : {array, matrix}
    Starting guess for the solution.
tol : float
    Relative tolerance to achieve before terminating.
maxiter : integer
    Maximum number of iterations. Iteration will stop after maxiter steps even if the specified tolerance has not been achieved.
M : {sparse matrix, dense matrix, LinearOperator}
    Preconditioner for A. The preconditioner should approximate the inverse of A. Effective preconditioning dramatically improves the rate of convergence, which implies that fewer iterations are needed to reach a given error tolerance.
callback : function
    User-supplied function to call after each iteration. It is called as callback(xk), where xk is the current solution vector.

LinearOperator Class
from scipy.sparse.linalg.interface import LinearOperator
""" Construct a 1000x1000 lil_matrix and add some values to it, convert it to CSR format and solve A x = b for x:and solve a linear system with a direct solver. """ import numpy as np import scipy.sparse as sps from matplotlib import pyplot as plt from scipy.sparse.linalg.dsolve import linsolve rand = np.random.rand mtx = sps.lil_matrix((1000, 1000), dtype=np.float64) mtx[0, :100] = rand(100) mtx[1, 100:200] = mtx[0, :100] mtx.setdiag(rand(1000)) plt.clf() plt.spy(mtx, marker=., markersize=2) plt.show() mtx = mtx.tocsr() rhs = rand(1000) x = linsolve.spsolve(mtx, rhs) print rezidual:, np.linalg.norm(mtx * x - rhs)
- common interface for performing matrix vector products
- useful abstraction that enables using dense and sparse matrices within the solvers, as well as matrix-free solutions
- has shape and matvec() (+ some optional parameters)
- example:
>>> import numpy as np >>> from scipy.sparse.linalg import LinearOperator >>> def mv(v): ... return np.array([2*v[0], 3*v[1]]) ... >>> A = LinearOperator((2, 2), matvec=mv) >>> A <2x2 LinearOperator with unspecified dtype> >>> A.matvec(np.ones(2)) array([ 2., 3.]) >>> A * np.ones(2) array([ 2., 3.])
examples/direct_solve.py
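A minimal sketch of calling one of the iterative solvers listed above on a small symmetric positive definite system (the values are arbitrary, not from the original notes):

import numpy as np
import scipy.sparse as sps
from scipy.sparse.linalg import cg

A = sps.csr_matrix([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])

x, info = cg(A, b, tol=1e-10)   # info == 0 signals convergence
print x, info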
A Few Notes on Preconditioning

- problem specific
- often hard to develop
- if not sure, try ILU
  - available in dsolve as spilu()
# plot the eigenvectors
import pylab
pylab.figure(figsize=(9, 9))
for i in range(K):
    pylab.subplot(3, 3, i + 1)
    pylab.title('Eigenvector %d' % i)
    pylab.pcolor(V[:, i].reshape(N, N))
    pylab.axis('equal')
    pylab.axis('off')
pylab.show()
http://pysparse.sourceforge.net/
authors: Emmanuelle Gouillart, Gaël Varoquaux

Image = 2-D numerical array (or 3-D: CT, MRI, 2D + time; 4-D, ...)

Here, image == Numpy array np.array

Tools used in this tutorial:

- numpy: basic array manipulation
- scipy: scipy.ndimage submodule dedicated to image processing (n-dimensional images). See http://docs.scipy.org/doc/scipy/reference/tutorial/ndimage.html
- a few examples use specialized toolkits working with np.array:
  - Scikit Image
  - scikit-learn

Common tasks in image processing:

- Input/Output, displaying images
- Basic manipulations: cropping, flipping, rotating, ...
- Image filtering: denoising, sharpening
- Image segmentation: labeling pixels corresponding to different objects
- Classification
- ...

More powerful and complete modules:

- OpenCV (Python bindings)
- CellProfiler
- ITK with Python bindings
- many more...
Chapter contents

- Opening and writing to image files (page 235)
- Displaying images (page 236)
- Basic manipulations (page 238)
  - Statistical information (page 239)
  - Geometrical transformations (page 240)
- Image filtering (page 240)
  - Blurring/smoothing (page 241)
  - Sharpening (page 241)
  - Denoising (page 242)
  - Mathematical morphology (page 245)
- Feature extraction (page 249)
  - Edge detection (page 249)
  - Segmentation (page 251)
- Measuring objects properties (page 255)
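A minimal sketch of basic image input/output with 2011-era scipy (scipy.misc.imread and scipy.misc.imsave require PIL; not part of the surviving text):

>>> import scipy
>>> from scipy import misc
>>> l = scipy.lena()
>>> misc.imsave('lena.png', l)      # uses the Image module (PIL)
>>> lena = misc.imread('lena.png')
>>> lena.dtype
dtype('uint8')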
Need to know the shape and dtype of the image (how to separate data bytes). For large data, use np.memmap for memory mapping:
>>> lena_memmap = np.memmap('lena.raw', dtype=np.int64, shape=(512, 512))
(data are read from the file, and not loaded into memory)

Working on a list of image files
>>> from glob import glob
>>> filelist = glob('pattern*.png')
>>> filelist.sort()
plt.subplot(131)
plt.imshow(l, cmap=plt.cm.gray)
plt.subplot(132)
plt.imshow(l, cmap=plt.cm.gray, vmin=30, vmax=200)
plt.axis('off')
plt.subplot(133)
plt.imshow(l, cmap=plt.cm.gray)
plt.contour(l, [60, 211])
plt.axis('off')

plt.subplots_adjust(wspace=0, hspace=0., top=0.99, bottom=0.01, left=0.05, right=0.99)
plt.show()
dtype is uint8 for 8-bit images (0-255)

Opening raw files (camera, 3-D images)
>>> l.tofile('lena.raw')  # Create raw file
>>> lena_from_raw = np.fromfile('lena.raw', dtype=np.int64)
>>> lena_from_raw.shape
3-D visualization: Mayavi

See 3D plotting with Mayavi (page 264) and Volumetric data (page 267).

- Image plane widgets
- Isosurfaces
- ...
Other packages sometimes use graphical toolkits for visualization (GTK, Qt):
>>> lena = scipy.lena() >>> lena[0, 40] 166 >>> # Slicing >>> lena[10:13, 20:23] array([[158, 156, 157], [157, 155, 155], [157, 157, 158]]) >>> lena[100:120] = 255 >>>
lx, ly = lena.shape X, Y = np.ogrid[0:lx, 0:ly] mask = (X - lx/2)**2 + (Y - ly/2)**2 > lx*ly/4 # Masks lena[mask] = 0 # Fancy indexing lena[range(400), range(400)] = 255
import numpy as np
import scipy
import matplotlib.pyplot as plt

lena = scipy.lena()
lena[10:13, 20:23]
lena[100:120] = 255

lx, ly = lena.shape
X, Y = np.ogrid[0:lx, 0:ly]
mask = (X - lx/2)**2 + (Y - ly/2)**2 > lx*ly/4
lena[mask] = 0
lena[range(400), range(400)] = 255

plt.figure(figsize=(3, 3))
plt.axes([0, 0, 1, 1])
plt.imshow(lena, cmap=plt.cm.gray)
plt.axis('off')
import numpy as np
import scipy
from scipy import ndimage
import matplotlib.pyplot as plt

lena = scipy.lena()
lx, ly = lena.shape
# Cropping
crop_lena = lena[lx/4:-lx/4, ly/4:-ly/4]
# up <-> down flip
flip_ud_lena = np.flipud(lena)
# rotation
rotate_lena = ndimage.rotate(lena, 45)
rotate_lena_noreshape = ndimage.rotate(lena, 45, reshape=False)

plt.figure(figsize=(12.5, 2.5))
plt.subplot(151)
plt.imshow(lena, cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(152)
plt.imshow(crop_lena, cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(153)
plt.imshow(flip_ud_lena, cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(154)
plt.imshow(rotate_lena, cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(155)
plt.imshow(rotate_lena_noreshape, cmap=plt.cm.gray)
plt.axis('off')

plt.subplots_adjust(wspace=0.02, hspace=0.3, top=1, bottom=0.1, left=0, right=1)
np.histogram
12.4.1 Blurring/smoothing
Gaussian filter from scipy.ndimage:
>>> lena = scipy.lena() >>> blurred_lena = ndimage.gaussian_filter(lena, sigma=3) >>> very_blurred = ndimage.gaussian_filter(lena, sigma=5)
Uniform filter
>>> local_mean = ndimage.uniform_filter(lena, size=11)

import numpy as np
import scipy
from scipy import ndimage
import matplotlib.pyplot as plt

lena = scipy.lena()
blurred_lena = ndimage.gaussian_filter(lena, sigma=3)
very_blurred = ndimage.gaussian_filter(lena, sigma=5)
local_mean = ndimage.uniform_filter(lena, size=11)

plt.figure(figsize=(9, 3))
plt.subplot(131)
plt.imshow(blurred_lena, cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(132)
plt.imshow(very_blurred, cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(133)
plt.imshow(local_mean, cmap=plt.cm.gray)
plt.axis('off')

plt.subplots_adjust(wspace=0, hspace=0., top=0.99, bottom=0.01, left=0.01, right=0.99)
12.4.3 Denoising
Noisy lena:
>>> l = scipy.lena() >>> l = l[230:310, 210:350] >>> noisy = l + 0.4*l.std()*np.random.random(l.shape)
12.4.2 Sharpening
Sharpen a blurred image:
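A minimal sketch of one classic approach, unsharp masking: blur, then add back a multiple of the lost detail (the weight alpha here is an assumption, not from the original text):

>>> from scipy import ndimage
>>> blurred_l = ndimage.gaussian_filter(scipy.lena(), 3)
>>> filter_blurred_l = ndimage.gaussian_filter(blurred_l, 1)
>>> alpha = 30
>>> sharpened = blurred_l + alpha * (blurred_l - filter_blurred_l)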
A Gaussian filter smoothes the noise out... and the edges as well:
>>> gauss_denoised = ndimage.gaussian_filter(noisy, 2)
Most local linear isotropic filters blur the image (ndimage.uniform_filter).

A median filter preserves the edges better:
>>> med_denoised = ndimage.median_filter(noisy, 3)

import numpy as np
import scipy
from scipy import ndimage
import matplotlib.pyplot as plt

l = scipy.lena()
l = l[230:290, 220:320]
noisy = l + 0.4*l.std()*np.random.random(l.shape)

gauss_denoised = ndimage.gaussian_filter(noisy, 2)
med_denoised = ndimage.median_filter(noisy, 3)
plt.figure(figsize=(16, 5))
plt.subplot(141)
plt.imshow(im, interpolation='nearest')
plt.axis('off')
plt.title('Original image', fontsize=20)
plt.subplot(142)
plt.imshow(im_noise, interpolation='nearest', vmin=0, vmax=5)
plt.axis('off')
plt.title('Noisy image', fontsize=20)
plt.subplot(143)
plt.imshow(im_med, interpolation='nearest', vmin=0, vmax=5)
plt.axis('off')
plt.title('Median filter', fontsize=20)
plt.subplot(144)
plt.imshow(np.abs(im - im_med), cmap=plt.cm.hot, interpolation='nearest')
plt.axis('off')
plt.title('Error', fontsize=20)
plt.figure(figsize=(12, 2.8))
plt.subplot(131)
plt.imshow(noisy, cmap=plt.cm.gray, vmin=40, vmax=220)
plt.axis('off')
plt.title('noisy', fontsize=20)
plt.subplot(132)
plt.imshow(gauss_denoised, cmap=plt.cm.gray, vmin=40, vmax=220)
plt.axis('off')
plt.title('Gaussian filter', fontsize=20)
plt.subplot(133)
plt.imshow(med_denoised, cmap=plt.cm.gray, vmin=40, vmax=220)
plt.axis('off')
plt.title('Median filter', fontsize=20)

plt.subplots_adjust(wspace=0.02, hspace=0.02, top=0.9, bottom=0, left=0, right=1)
Other rank filters: ndimage.maximum_filter, ndimage.percentile_filter

Other local non-linear filters: Wiener (scipy.signal.wiener), etc.

Median filter: better result for straight boundaries (low curvature):

Non-local filters

Total-variation (TV) denoising. Find a new image so that the total-variation of the image (integral of the L1 norm of the gradient) is minimized, while being close to the measured image:
>>> im = np.zeros((20, 20))
>>> im[5:-5, 5:-5] = 1
>>> im = ndimage.distance_transform_bf(im)
>>> im_noise = im + 0.2*np.random.randn(*im.shape)
>>> im_med = ndimage.median_filter(im_noise, 3)

>>> # from scikits.image.filter import tv_denoise
>>> from tv_denoise import tv_denoise
>>> tv_denoised = tv_denoise(noisy, weight=10)
>>> # More denoising (to the expense of fidelity to data)
>>> tv_denoised = tv_denoise(noisy, weight=50)
import numpy as np import scipy from scipy import ndimage import matplotlib.pyplot as plt im = np.zeros((20, 20)) im[5:-5, 5:-5] = 1 im = ndimage.distance_transform_bf(im) im_noise = im + 0.2*np.random.randn(*im.shape) im_med = ndimage.median_filter(im_noise, 3)
The total variation filter tv_denoise is available in scikits.image (doc: http://scikits-image.org/docs/dev/api/scikits.image.filter.html#tv-denoise), but for convenience we've shipped it as a standalone module with this tutorial.
import numpy as np import scipy from scipy import ndimage import matplotlib.pyplot as plt # from scikits.image.filter import tv_denoise from tv_denoise import tv_denoise l = scipy.lena() l = l[230:290, 220:320]
plt.figure(figsize=(12, 2.8))
plt.subplot(131)
plt.imshow(noisy, cmap=plt.cm.gray, vmin=40, vmax=220)
plt.axis('off')
plt.title('noisy', fontsize=20)
plt.subplot(132)
plt.imshow(tv_denoised, cmap=plt.cm.gray, vmin=40, vmax=220)
plt.axis('off')
plt.title('TV denoising', fontsize=20)

tv_denoised = tv_denoise(noisy, weight=50)
plt.subplot(133)
plt.imshow(tv_denoised, cmap=plt.cm.gray, vmin=40, vmax=220)
plt.axis('off')
plt.title('(more) TV denoising', fontsize=20)

plt.subplots_adjust(wspace=0.02, hspace=0.02, top=0.9, bottom=0, left=0, right=1)
Erosion = minimum filter. Replace the value of a pixel by the minimal value covered by the structuring element:
>>> a = np.zeros((7,7), dtype=np.int) >>> a[1:6, 2:5] = 1 >>> a array([[0, 0, 0, 0, 0, 0, 0], [0, 0, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 0, 0], [0, 0, 0, 0, 0, 0, 0]]) >>> ndimage.binary_erosion(a).astype(a.dtype) array([[0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0, 0], [0, 0, 0, 1, 0, 0, 0], [0, 0, 0, 1, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0]]) >>> #Erosion removes objects smaller than the structure >>> ndimage.binary_erosion(a, structure=np.ones((5,5))).astype(a.dtype) array([[0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0]])
245
246
Dilation = maximum filter. Replace the value of a pixel by the maximal value covered by the structuring element:

>>> a = np.zeros((5, 5))
>>> a[2, 2] = 1
>>> a
array([[ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  1.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.]])
>>> ndimage.binary_dilation(a).astype(a.dtype)
array([[ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  1.,  0.,  0.],
       [ 0.,  1.,  1.,  1.,  0.],
       [ 0.,  0.,  1.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.]])
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt

im = np.zeros((64, 64))
np.random.seed(2)
x, y = (63*np.random.random((2, 8))).astype(np.int)
im[x, y] = np.arange(8)

bigger_points = ndimage.grey_dilation(im, size=(5, 5), structure=np.ones((5, 5)))

square = np.zeros((16, 16))
square[4:-4, 4:-4] = 1
dist = ndimage.distance_transform_bf(square)
dilate_dist = ndimage.grey_dilation(dist, size=(3, 3), \
        structure=np.ones((3, 3)))

plt.figure(figsize=(12.5, 3))
plt.subplot(141)
plt.imshow(im, interpolation='nearest', cmap=plt.cm.spectral)
plt.axis('off')
plt.subplot(142)
plt.imshow(bigger_points, interpolation='nearest', cmap=plt.cm.spectral)
plt.axis('off')
plt.subplot(143)
plt.imshow(dist, interpolation='nearest', cmap=plt.cm.spectral)
plt.axis('off')
plt.subplot(144)
plt.imshow(dilate_dist, interpolation='nearest', cmap=plt.cm.spectral)
plt.axis('off')

plt.subplots_adjust(wspace=0, hspace=0.02, top=0.99, bottom=0.01, left=0.01, right=0.99)
import numpy as np from scipy import ndimage import matplotlib.pyplot as plt square = np.zeros((32, 32)) square[10:-10, 10:-10] = 1 np.random.seed(2) x, y = (32*np.random.random((2, 20))).astype(np.int) square[x, y] = 1 open_square = ndimage.binary_opening(square) eroded_square = ndimage.binary_erosion(square) reconstruction = ndimage.binary_propagation(eroded_square, mask=square)
plt.figure(figsize=(9.5, 3))
plt.subplot(131)
plt.imshow(square, cmap=plt.cm.gray, interpolation='nearest')
plt.axis('off')
plt.subplot(132)
plt.imshow(open_square, cmap=plt.cm.gray, interpolation='nearest')
plt.axis('off')
plt.subplot(133)
plt.imshow(reconstruction, cmap=plt.cm.gray, interpolation='nearest')
plt.axis('off')

plt.subplots_adjust(wspace=0, hspace=0.02, top=0.99, bottom=0.01, left=0.01, right=0.99)
plt.figure(figsize=(16, 5))
plt.subplot(141)
plt.imshow(im, cmap=plt.cm.gray)
plt.axis('off')
plt.title('square', fontsize=20)
plt.subplot(142)
plt.imshow(sx)
plt.axis('off')
plt.title('Sobel (x direction)', fontsize=20)
plt.subplot(143)
plt.imshow(sob)
plt.axis('off')
plt.title('Sobel filter', fontsize=20)

im += 0.07*np.random.random(im.shape)
sx = ndimage.sobel(im, axis=0, mode='constant')
sy = ndimage.sobel(im, axis=1, mode='constant')
sob = np.hypot(sx, sy)

plt.subplot(144)
plt.imshow(sob)
plt.axis('off')
plt.title('Sobel for noisy image', fontsize=20)
Closing: dilation + erosion

Many other mathematical morphology operations: hit and miss transform, tophat, etc.
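For instance, closing fills small holes; a quick check (not in the original text):

>>> a = np.zeros((5, 5), dtype=np.int)
>>> a[1:-1, 1:-1] = 1
>>> a[2, 2] = 0                                # a one-pixel hole
>>> ndimage.binary_closing(a).astype(np.int)   # the hole is filled
array([[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]])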
plt.subplots_adjust(wspace=0.02, hspace=0.02, top=1, bottom=0, left=0, right=0.9)
Canny filter

The Canny filter is available in scikits.image (doc), but for convenience we've shipped it as a standalone module with this tutorial.
>>> # from scikits.image.filter import canny
>>> # or use the module shipped with the tutorial
>>> im += 0.1*np.random.random(im.shape)
>>> edges = canny(im, 1, 0.4, 0.2)  # not enough smoothing
>>> edges = canny(im, 3, 0.3, 0.2)  # better parameters
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt
# from scikits.image.filter import canny
from image_source_canny import canny

im = np.zeros((256, 256))
im[64:-64, 64:-64] = 1

im = ndimage.rotate(im, 15, mode='constant')
im = ndimage.gaussian_filter(im, 8)
im += 0.1*np.random.random(im.shape)

edges = canny(im, 1, 0.4, 0.2)

plt.figure(figsize=(12, 4))
plt.subplot(131)
plt.imshow(im, cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(132)
plt.imshow(edges, cmap=plt.cm.gray)
plt.axis('off')
import numpy as np from scipy import ndimage import matplotlib.pyplot as plt np.random.seed(1) n = 10 l = 256 im = np.zeros((l, l)) points = l*np.random.random((2, n**2)) im[(points[0]).astype(np.int), (points[1]).astype(np.int)] = 1 im = ndimage.gaussian_filter(im, sigma=l/(4.*n)) mask = (im > im.mean()).astype(np.float) mask += 0.1 * im
edges = canny(im, 3, 0.3, 0.2)
plt.subplot(133)
plt.imshow(edges, cmap=plt.cm.gray)
plt.axis('off')

plt.subplots_adjust(wspace=0.02, hspace=0.02, top=1, bottom=0, left=0, right=1)
img = mask + 0.2*np.random.randn(*mask.shape)

hist, bin_edges = np.histogram(img, bins=60)
bin_centers = 0.5*(bin_edges[:-1] + bin_edges[1:])

binary_img = img > 0.5

plt.figure(figsize=(11, 4))

plt.subplot(131)
plt.imshow(img)
plt.axis('off')
plt.subplot(132)
plt.plot(bin_centers, hist, lw=2)
plt.axvline(0.5, color='r', ls='--', lw=2)
plt.text(0.57, 0.8, 'histogram', fontsize=20,
         transform=plt.gca().transAxes)
plt.yticks([])
plt.subplot(133)
plt.imshow(binary_img, cmap=plt.cm.gray, interpolation='nearest')
plt.axis('off')
12.5.2 Segmentation
Histogram-based segmentation (no spatial information)
>>> n = 10
>>> l = 256
>>> im = np.zeros((l, l))
>>> np.random.seed(1)
>>> points = l*np.random.random((2, n**2))
>>> im[(points[0]).astype(np.int), (points[1]).astype(np.int)] = 1
>>> im = ndimage.gaussian_filter(im, sigma=l/(4.*n))
>>> mask = (im > im.mean()).astype(np.float)
>>> mask += 0.1 * im
>>> img = mask + 0.2*np.random.randn(*mask.shape)
>>> hist, bin_edges = np.histogram(img, bins=60)
>>> bin_centers = 0.5*(bin_edges[:-1] + bin_edges[1:])
>>> binary_img = img > 0.5
>>> classif = GMM(n_components=2, cvtype='full')
>>> classif.fit(img.reshape((img.size, 1)))
GMM(cvtype='full', n_components=2)
>>> classif.means
array([[ 0.9353155 ],
       [-0.02966039]])
>>> np.sqrt(classif.covars).ravel()
array([ 0.35074631,  0.28225327])
>>> classif.weights
array([ 0.40989799,  0.59010201])
>>> threshold = np.mean(classif.means)
>>> binary_img = img > threshold
plt.subplot(141) plt.imshow(binary_img[:l, :l], cmap=plt.cm.gray) plt.axis(off) plt.subplot(142) plt.imshow(open_img[:l, :l], cmap=plt.cm.gray) plt.axis(off) plt.subplot(143) plt.imshow(close_img[:l, :l], cmap=plt.cm.gray) plt.axis(off) plt.subplot(144) plt.imshow(mask[:l, :l], cmap=plt.cm.gray) plt.contour(close_img[:l, :l], [0.5], linewidths=2, colors=r) plt.axis(off) plt.subplots_adjust(wspace=0.02, hspace=0.3, top=1, bottom=0.1, left=0, right=1) # Better than opening and closing: use reconstruction eroded_img = ndimage.binary_erosion(binary_img) reconstruct_img = ndimage.binary_propagation(eroded_img, mask=binary_img) tmp = np.logical_not(reconstruct_img) eroded_tmp = ndimage.binary_erosion(tmp) reconstruct_final = np.logical_not(ndimage.binary_propagation(eroded_tmp, mask=tmp)) """ plt.subplot(141) plt.imshow(binary_img[:l, :l], cmap=plt.cm.gray) plt.axis(off) plt.subplot(142) plt.imshow(eroded_img[:l, :l], cmap=plt.cm.gray) plt.axis(off) plt.subplot(143) plt.imshow(reconstruct_img[:l, :l], cmap=plt.cm.gray) plt.axis(off) plt.subplot(144) plt.imshow(mask[:l, :l], cmap=plt.cm.gray) plt.contour(reconstruct_final[:l, :l], [0.5], lw=4) plt.axis(off) """
import numpy as np from scipy import ndimage import matplotlib.pyplot as plt from scikits.learn.mixture import GMM np.random.seed(1) n = 10 l = 256 im = np.zeros((l, l)) points = l*np.random.random((2, n**2)) im[(points[0]).astype(np.int), (points[1]).astype(np.int)] = 1 im = ndimage.gaussian_filter(im, sigma=l/(4.*n)) mask = (im > im.mean()).astype(np.float)
Exercise

Check that reconstruction operations (erosion + propagation) produce a better result than opening/closing:
>>> eroded_img = ndimage.binary_erosion(binary_img)
>>> reconstruct_img = ndimage.binary_propagation(eroded_img,
...                                              mask=binary_img)
>>> tmp = np.logical_not(reconstruct_img)
>>> eroded_tmp = ndimage.binary_erosion(tmp)
>>> reconstruct_final = \
...     np.logical_not(ndimage.binary_propagation(eroded_tmp, mask=tmp))
>>> np.abs(mask - close_img).mean()
0.014678955078125
>>> np.abs(mask - reconstruct_final).mean()
0.0042572021484375
img = mask + 0.3*np.random.randn(*mask.shape) binary_img = img > 0.5 # Remove small white regions open_img = ndimage.binary_opening(binary_img) # Remove small black hole close_img = ndimage.binary_closing(open_img) plt.figure(figsize=(12, 3)) l = 128
Exercise

Check how a first denoising step (median filter, total variation) modifies the histogram, and check that the resulting histogram-based segmentation is more accurate.
n = 10 l = 256 im = np.zeros((l, l)) points = l*np.random.random((2, n**2)) im[(points[0]).astype(np.int), (points[1]).astype(np.int)] = 1 im = ndimage.gaussian_filter(im, sigma=l/(4.*n)) mask = im > im.mean()
radius1, radius2, radius3, radius4 = 16, 14, 15, 14

circle1 = (x - center1[0])**2 + (y - center1[1])**2 < radius1**2
circle2 = (x - center2[0])**2 + (y - center2[1])**2 < radius2**2
circle3 = (x - center3[0])**2 + (y - center3[1])**2 < radius3**2
circle4 = (x - center4[0])**2 + (y - center4[1])**2 < radius4**2
# 4 circles
img = circle1 + circle2 + circle3 + circle4
mask = img.astype(bool)
img = img.astype(float)

img += 1 + 0.2*np.random.randn(*img.shape)

# Convert the image into a graph with the value of the gradient on the
# edges.
graph = image.img_to_graph(img, mask=mask)

# Take a decreasing function of the gradient: we take it weakly
# dependent on the gradient, so the segmentation is close to a Voronoi
graph.data = np.exp(-graph.data/graph.data.std())

labels = spectral_clustering(graph, k=4, mode='arpack')
label_im = -np.ones(mask.shape)
label_im[mask] = labels
label_im, nb_labels = ndimage.label(mask) plt.figure(figsize=(9,3)) plt.subplot(131) plt.imshow(im) plt.axis(off) plt.subplot(132) plt.imshow(mask, cmap=plt.cm.gray) plt.axis(off) plt.subplot(133) plt.imshow(label_im, cmap=plt.cm.spectral) plt.axis(off) plt.subplots_adjust(wspace=0.02, hspace=0.02, top=1, bottom=0, left=0, right=1)
>>> sizes = ndimage.sum(mask, label_im, range(nb_labels + 1)) >>> mean_vals = ndimage.sum(im, label_im, range(1, nb_labels + 1))
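These measurements can then be used to clean up small components and relabel the remaining ones; a sketch (the size threshold 1000 is an assumption) that also defines the label_clean array used in the script below:

>>> mask_size = sizes < 1000
>>> remove_pixel = mask_size[label_im]
>>> label_im[remove_pixel] = 0
>>> labels = np.unique(label_im)
>>> label_clean = np.searchsorted(labels, label_im)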
plt.figure(figsize=(6 ,3)) plt.subplot(121) plt.imshow(label_im, cmap=plt.cm.spectral) plt.axis(off) plt.subplot(122) plt.imshow(label_clean, vmax=nb_labels, cmap=plt.cm.spectral) plt.axis(off) plt.subplots_adjust(wspace=0.01, hspace=0.01, top=1, bottom=0, left=0, right=1)
Other spatial measures: ndimage.center_of_mass, ndimage.maximum_position, etc. Can be used outside the limited scope of segmentation applications. Example: block mean:
>>> l = scipy.lena()
>>> sx, sy = l.shape
>>> X, Y = np.ogrid[0:sx, 0:sy]
>>> regions = sy/6 * (X/4) + Y/6  # note that we use broadcasting
>>> block_mean = ndimage.mean(l, labels=regions,
...                           index=np.arange(1, regions.max() + 1))
>>> block_mean.shape = (sx/4, sy/6)
import numpy as np import scipy from scipy import ndimage import matplotlib.pyplot as plt l = scipy.lena() sx, sy = l.shape X, Y = np.ogrid[0:sx, 0:sy] regions = sy/6 * (X/4) + Y/6 block_mean = ndimage.mean(l, labels=regions, index=np.arange(1, regions.max() +1)) block_mean.shape = (sx/4, sy/6) plt.figure(figsize=(5,5)) plt.imshow(block_mean, cmap=plt.cm.gray) plt.axis(off)
When regions are regular blocks, it is more efficient to use stride tricks (Example: fake dimensions with strides (page 173)).

Non-regularly-spaced blocks: radial mean:
>>> r = np.hypot(X - sx/2, Y - sy/2)
>>> rbin = (20*r/r.max()).astype(np.int)
>>> radial_mean = ndimage.mean(l, labels=rbin, index=np.arange(1, rbin.max() + 1))

import numpy as np
import scipy
from scipy import ndimage
import matplotlib.pyplot as plt

l = scipy.lena()
sx, sy = l.shape
X, Y = np.ogrid[0:sx, 0:sy]
r = np.hypot(X - sx/2, Y - sy/2)
rbin = (20*r/r.max()).astype(np.int)
radial_mean = ndimage.mean(l, labels=rbin, index=np.arange(1, rbin.max() + 1))

plt.figure(figsize=(5, 5))
plt.axes([0, 0, 1, 1])
plt.imshow(rbin, cmap=plt.cm.spectral)
plt.axis('off')
>>> mask = im > im.mean()
>>> granulo = granulometry(mask, sizes=np.arange(2, 19, 4))

import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt

def disk_structure(n):
    struct = np.zeros((2*n + 1, 2*n + 1))
    x, y = np.indices((2*n + 1, 2*n + 1))
    mask = (x - n)**2 + (y - n)**2 <= n**2
    struct[mask] = 1
    return struct.astype(np.bool)
def granulometry(data, sizes=None):
    s = max(data.shape)
    if sizes is None:
        sizes = range(1, s/2, 2)
    granulo = [ndimage.binary_opening(data,
               structure=disk_structure(n)).sum() for n in sizes]
    return granulo
np.random.seed(1)
n = 10
l = 256
im = np.zeros((l, l))
points = l*np.random.random((2, n**2))
im[(points[0]).astype(np.int), (points[1]).astype(np.int)] = 1
im = ndimage.gaussian_filter(im, sigma=l/(4.*n))
Other measures

Correlation function, Fourier/wavelet spectrum, etc. One example with mathematical morphology: granulometry (http://en.wikipedia.org/wiki/Granulometry_%28morphology%29).
>>> def disk_structure(n):
...     struct = np.zeros((2*n + 1, 2*n + 1))
...     x, y = np.indices((2*n + 1, 2*n + 1))
...     mask = (x - n)**2 + (y - n)**2 <= n**2
...     struct[mask] = 1
...     return struct.astype(np.bool)
...
>>> def granulometry(data, sizes=None):
...     s = max(data.shape)
...     if sizes is None:
...         sizes = range(1, s/2, 2)
...     granulo = [ndimage.binary_opening(data,
...                structure=disk_structure(n)).sum() for n in sizes]
...     return granulo
...
>>> np.random.seed(1)
>>> n = 10
>>> l = 256
>>> im = np.zeros((l, l))
>>> points = l*np.random.random((2, n**2))
>>> im[(points[0]).astype(np.int), (points[1]).astype(np.int)] = 1
>>> im = ndimage.gaussian_filter(im, sigma=l/(4.*n))
mask = im > im.mean()
granulo = granulometry(mask, sizes=np.arange(2, 19, 4))

plt.figure(figsize=(6, 2.2))

plt.subplot(121)
plt.imshow(mask, cmap=plt.cm.gray)

opened = ndimage.binary_opening(mask, structure=disk_structure(10))
opened_more = ndimage.binary_opening(mask, structure=disk_structure(14))
plt.contour(opened, [0.5], colors='b', linewidths=2)
plt.contour(opened_more, [0.5], colors='r', linewidths=2)
plt.axis('off')

plt.subplot(122)
plt.plot(np.arange(2, 19, 4), granulo, 'ok', ms=8)
CHAPTER 13

3D plotting with Mayavi
13.2.2 Lines

In [5]: mlab.clf()
In [6]: t = np.linspace(0, 20, 200)
In [7]: mlab.plot3d(np.sin(t), np.cos(t), 0.1*t, t)
Out[7]: <enthought.mayavi.modules.surface.Surface object at 0xcc3e1dc>

13.2.3 Elevation surface

np.mgrid[-10:10:100j, -10:10:100j] creates an x, y grid going from -10 to 10, with 100 steps in each direction.

In [9]: x, y = np.mgrid[-10:10:100j, -10:10:100j]
In [10]: r = np.sqrt(x**2 + y**2)
In [11]: z = np.sin(r)/r
In [12]: mlab.surf(z, warp_scale='auto')
Out[12]: <enthought.mayavi.modules.surface.Surface object at 0xcdb98fc>
Note: A surface is defined by points connected to form triangles or polygons. In mlab.surf and mlab.mesh, the connectivity is implicitly given by the layout of the arrays. See also mlab.triangular_mesh.

Our data is often more than points and values: it needs some connectivity information.
Interactive image plane widgets: drag to change the visualized plane. This function works with a regular orthogonal grid:
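A minimal sketch using Mayavi's mlab.pipeline API (the random 40x40x40 array is only a stand-in for real volumetric data):

In [13]: s = np.random.random((40, 40, 40))   # any 3D scalar field on a regular grid
In [14]: mlab.pipeline.image_plane_widget(mlab.pipeline.scalar_field(s),
   ....:                                  plane_orientation='x_axes',
   ....:                                  slice_index=20)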
Example docstring: mlab.mesh

Plots a surface using grid-spaced data supplied as 2D arrays.

Function signatures:
mesh(x, y, z, ...)
In [8]: mlab.mesh(x, y, z, extent=[0, 1, 0, 1, 0, 1],
   ...:           representation='wireframe', line_width=1, color=(0.5, 0.5, 0.5))
Out[8]: <enthought.mayavi.modules.surface.Surface object at 0xdd6a71c>
x, y, z are 2D arrays, all of the same shape, giving the positions of the vertices of the surface. The connectivity between these points is implied by the connectivity on the arrays.

For simple structures (such as orthogonal grids) prefer the surf function, as it will create more efficient data structures.

Keyword arguments:

color: the color of the vtk object. Overrides the colormap, if any, when specified. This is specified as a triplet of floats ranging from 0 to 1, e.g. (1, 1, 1) for white.

colormap: type of colormap to use.

extent: [xmin, xmax, ymin, ymax, zmin, zmax]. Default is the x, y, z arrays' extent. Use this to change the extent of the object created.

figure: Figure to populate.

line_width: the width of the lines, if any used. Must be a float. Default: 2.0

mask: boolean mask array to suppress some data points.

mask_points: if supplied, only one out of mask_points data points is displayed. This option is useful to reduce the number of points displayed on large datasets. Must be an integer or None.

mode: the mode of the glyphs. Must be '2darrow' or '2dcircle' or '2dcross' or '2ddash' or '2ddiamond' or '2dhooked_arrow' or '2dsquare' or '2dthick_arrow' or '2dthick_cross' or '2dtriangle' or '2dvertex' or 'arrow' or 'cone' or 'cube' or 'cylinder' or 'point' or 'sphere'. Default: 'sphere'

name: the name of the vtk object created.

representation: the representation type used for the surface. Must be 'surface' or 'wireframe' or 'points' or 'mesh' or 'fancymesh'. Default: 'surface'

resolution: the resolution of the glyph created. For spheres, for instance, this is the number of divisions along theta and phi. Must be an integer. Default: 8

scalars: optional scalar data.

scale_factor: scale factor of the glyphs used to represent the vertices, in fancy_mesh mode. Must be a float. Default: 0.05

scale_mode: the scaling mode for the glyphs ('vector', 'scalar', or 'none').

transparent: make the opacity of the actor depend on the scalar.

tube_radius: radius of the tubes used to represent the lines, in mesh mode. If None, simple lines are used.

tube_sides: number of sides of the tubes used to represent the lines. Must be an integer. Default: 6

vmax: vmax is used to scale the colormap. If None, the max of the data will be used.

vmin: vmin is used to scale the colormap. If None, the min of the data will be used.

Example:
In [1]: import numpy as np
In [2]: r, theta = np.mgrid[0:10, -np.pi:np.pi:10j]
In [3]: x = r*np.cos(theta)
In [4]: y = r*np.sin(theta)
In [5]: z = np.sin(r)/r
In [6]: from enthought.mayavi import mlab
In [7]: mlab.mesh(x, y, z, colormap='gist_earth', extent=[0, 1, 0, 1, 0, 1])
Out[7]: <enthought.mayavi.modules.surface.Surface object at 0xde6f08c>
13.3.3 Decorations
Different items can be added to the figure to carry extra information, such as a colorbar or a title.
In [9]: mlab.colorbar(Out[7], orientation='vertical')
Out[9]: <tvtk_classes.scalar_bar_actor.ScalarBarActor object at 0xd897f8c>
In [10]: mlab.title('polar mesh')
Out[10]: <enthought.mayavi.modules.text.Text object at 0xd8ed38c>
In [11]: mlab.outline(Out[7])
Out[11]: <enthought.mayavi.modules.outline.Outline object at 0xdd21b6c>
In [12]: mlab.axes(Out[7])
Out[12]: <enthought.mayavi.modules.axes.Axes object at 0xd2e4bcc>
Warning: extent: if we specified extents for a plotting object, mlab.outline and mlab.axes don't get them by default.
13.4 Interaction
The quickest way to create beautiful visualizations with Mayavi is probably to interactively tweak the various settings. Click on the Mayavi button in the scene, and you can control properties of objects with dialogs.
To find out what code can be used to program these changes, click on the red button as you modify those properties, and it will generate the corresponding lines of code.

CHAPTER 14

Sympy : Symbolic Mathematics in Python

author Fabian Pedregosa

Objectives

1. Evaluate expressions with arbitrary precision.
2. Perform algebraic manipulations on symbolic expressions.
3. Perform basic calculus tasks (limits, differentiation and integration) with symbolic expressions.
4. Solve polynomial and transcendental equations.
5. Solve some differential equations.

What is SymPy? SymPy is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system that can compete directly with commercial alternatives (Mathematica, Maple) while keeping the code as simple as possible in order to be comprehensible and easily extensible. SymPy is written entirely in Python and does not require any external libraries.

Sympy documentation and packages for installation can be found on http://sympy.org/
Chapter contents

- First Steps with SymPy (page 273)
  - Using SymPy as a calculator (page 273)
  - Exercises (page 274)
  - Symbols (page 274)
- Algebraic manipulations (page 274)
  - Expand (page 274)
  - Simplify (page 274)
  - Exercises (page 275)
- Calculus (page 275)
  - Limits (page 275)
  - Differentiation (page 275)
  - Series expansion (page 276)
  - Exercises (page 276)
  - Integration (page 276)
  - Exercises (page 276)
- Equation solving (page 276)
  - Exercises (page 277)
- Linear Algebra (page 277)
  - Matrices (page 277)
  - Differential Equations (page 278)
  - Exercises (page 278)
14.1 First Steps with SymPy

14.1.1 Using SymPy as a calculator

SymPy uses mpmath in the background, which makes it possible to perform computations using arbitrary-precision arithmetic. That way, some special constants, like e, pi, oo (Infinity), are treated as symbols and can be evaluated with arbitrary precision:

>>> from sympy import *
>>> pi**2
pi**2
>>> pi.evalf()
3.14159265358979
>>> (pi + exp(1)).evalf()
5.85987448204884

As you see, evalf evaluates the expression to a floating-point number. There is also a class representing mathematical infinity, called oo:

>>> oo > 99999
True
>>> oo + 1
oo

14.1.2 Exercises

1. Calculate sqrt(2) with 100 decimals.

14.1.3 Symbols

In contrast to other Computer Algebra Systems, in SymPy you have to declare symbolic variables explicitly:

>>> x = Symbol('x')
>>> y = Symbol('y')

Symbols can now be manipulated using some of the python operators: +, -, *, ** (arithmetic), and &, |, ~, >>, << (boolean).

14.2 Algebraic manipulations

14.2.1 Expand

Use this to expand an algebraic expression. It will try to denest powers and multiplications:

In [23]: expand((x + y)**3)
Out[23]: 3*x*y**2 + 3*y*x**2 + x**3 + y**3

14.2.2 Simplify

Use simplify if you would like to transform an expression into a simpler form:

In [19]: simplify((x + x*y)/x)
Out[19]: 1 + y

Simplification is a somewhat vague term, and more precise alternatives to simplify exist: powsimp (simplification of exponents), trigsimp (for trigonometric expressions), logcombine, radsimp, together.

14.2.3 Exercises

1. Calculate the expanded form of (x + y)**6.
2. Simplify the trigonometric expression sin(x) / cos(x).
14.3 Calculus
14.3.1 Limits
Limits are easy to use in SymPy: they follow the syntax limit(function, variable, point), so to compute the limit of f(x) as x -> 0, you would issue limit(f, x, 0):
>>> limit(sin(x)/x, x, 0) 1
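A few more standard limits, evaluated the same way (infinity is the oo object introduced earlier; the results follow basic calculus):

>>> limit(x, x, oo)
oo
>>> limit(1/x, x, oo)
0
>>> limit(x**x, x, 0)
1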
14.3.2 Differentiation

You can differentiate any SymPy expression using diff(func, var). Examples:

>>> diff(sin(x), x)
cos(x)
>>> diff(sin(2*x), x)
2*cos(2*x)
>>> diff(tan(x), x)
1 + tan(x)**2
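Higher derivatives can be calculated by passing the order as a third argument, diff(func, var, n); for instance (results easy to check by hand):

>>> diff(sin(2*x), x, 2)
-4*sin(2*x)
>>> diff(sin(2*x), x, 3)
-8*cos(2*x)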
14.3.4 Exercises

1. Compute the limit of sin(x)/x as x -> 0.
2. Calculate the derivative of log(x) with respect to x.

14.3.5 Integration

SymPy has support for indefinite and definite integration of transcendental elementary and special functions via the integrate() facility, which uses the powerful extended Risch-Norman algorithm and some heuristics and pattern matching. You can integrate elementary functions:

>>> integrate(6*x**5, x)
x**6
>>> integrate(sin(x), x)
-cos(x)
>>> integrate(log(x), x)
-x + x*log(x)
>>> integrate(2*x + sinh(x), x)
cosh(x) + x**2
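It is also possible to compute definite integrals by passing a (variable, lower_bound, upper_bound) tuple; for instance:

>>> integrate(x**3, (x, -1, 1))
0
>>> integrate(sin(x), (x, 0, pi/2))
1
>>> integrate(cos(x), (x, -pi/2, pi/2))
2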
14.3.6 Exercises
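14.4 Equation solving

SymPy is able to solve algebraic equations, in one and several variables. A one-variable example (the ordering of the roots may differ between SymPy versions):

In [7]: solve(x**4 - 1, x)
Out[7]: [1, -1, I, -I]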
As you can see, it takes as first argument an expression that is supposed to be equal to 0. It is able to solve a large part of polynomial equations, and is also capable of solving multiple equations with respect to multiple variables, giving a tuple as second argument:
In [8]: solve([x + 5*y - 2, -3*x + 6*y - 15], [x, y]) Out[8]: {y: 1, x: -3}
Another alternative in the case of polynomial equations is factor. factor returns the polynomial factorized into irreducible terms, and is capable of computing the factorization over various domains:
In [10]: f = x**4 - 3*x**2 + 1 In [11]: factor(f) Out[11]: (1 + x - x**2)*(1 - x - x**2) In [12]: factor(f, modulus=5) Out[12]: (2 + x)**2*(2 - x)**2
SymPy is also able to solve boolean equations, that is, to decide if a certain boolean expression is satisfiable or not. For this, we use the function satisfiable:
In [13]: satisfiable(x & y) Out[13]: {x: True, y: True}
This tells us that (x & y) is True whenever x and y are both True. If an expression cannot be true, i.e. no values of its arguments can make the expression True, it will return False:

In [14]: satisfiable(x & ~x)
Out[14]: False

14.4.1 Exercises

1. Solve the system of equations x + y = 2, 2*x + y = 0.
2. Are there boolean values x, y that make (~x | y) & (~y | x) true?

14.5 Linear Algebra

14.5.1 Matrices

In SymPy, matrices are created as instances of the Matrix class. Unlike a NumPy array, a Matrix can also contain symbolic entries; for instance:

>>> from sympy import Matrix
>>> A = Matrix([[1, x], [y, 1]])
>>> A**2
[1 + x*y,     2*x]
[    2*y, 1 + x*y]

14.5.2 Differential Equations

Keyword arguments can be given to dsolve in order to help it find the best possible resolution system. For example, if you know that it is a separable equation, you can use the keyword hint='separable' to force dsolve to resolve it as a separable equation:

In [5]: f = Function('f')
In [6]: dsolve(sin(x)*cos(f(x)) + cos(x)*sin(f(x))*f(x).diff(x), f(x), hint='separable')
Out[6]: -log(1 - sin(f(x))**2)/2 == C1 + log(1 - sin(x)**2)/2

14.5.3 Exercises

1. Solve the Bernoulli differential equation x*f(x).diff(x) + f(x) - f(x)**2.

Warning: TODO: correct this equation and convert to math directive!

2. Solve the same equation using hint='Bernoulli'. What do you observe?
CHAPTER 15

scikits.learn: machine learning in Python

author Fabian Pedregosa

Machine learning is a rapidly-growing field with several machine learning frameworks available for Python.

Chapter contents

- Loading an example dataset (page 280)
- Learning and Predicting (page 281)
- Supervised learning (page 281)
  - k-Nearest neighbors classifier (page 281)
  - Support vector machines (SVMs) for classification (page 282)
- Clustering: grouping observations together (page 283)
  - K-means clustering (page 283)
- Dimension Reduction with Principal Component Analysis (page 284)
- Putting it all together: face recognition with Support Vector Machines (page 285)
15.1 Loading an example dataset

First we will load some data to play with. The data we will use is a very simple flower dataset known as the Iris dataset. We have 150 observations of the iris flower, specifying some of its characteristics: sepal length, sepal width, petal length and petal width, together with its subtype: Iris Setosa, Iris Versicolour, Iris Virginica. To load the dataset into a Python object:
>>> from scikits.learn import datasets >>> iris = datasets.load_iris()
This data is stored in the .data member, which is a (n_samples, n_features) array.
>>> iris.data.shape (150, 4)
It is made of 150 observations of irises, each described by the 4 features mentioned earlier. The information about the class of each observation is stored in the target attribute of the dataset. This is an integer 1D array of length n_samples:
>>> iris.target.shape
(150,)
>>> import numpy as np
>>> np.unique(iris.target)
array([0, 1, 2])
The digits dataset is made of 1797 images, where each one is an 8x8 pixel image representing a hand-written digit:
>>> digits = datasets.load_digits() >>> digits.images.shape (1797, 8, 8) >>> import pylab as pl >>> pl.imshow(digits.images[0], cmap=pl.cm.gray_r) <matplotlib.image.AxesImage object at ...>
To use this dataset with the scikit, we transform each 8x8 image into a feature vector of length 64:
>>> data = digits.images.reshape((digits.images.shape[0], -1))
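The result is a 2D data array; a quick check (the shape follows from 1797 images of 8x8 = 64 pixels each):

>>> data.shape
(1797, 64)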
Training set and testing set: when experimenting with learning algorithms, it is important not to test the prediction of an estimator on the data used to fit the estimator.
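A minimal sketch of such a split on the digits data above (here clf stands for any scikits.learn estimator, e.g. svm.SVC(); the 90/10 split is an arbitrary choice):

>>> n_train = int(0.9 * len(data))                   # hold out the last 10%
>>> clf.fit(data[:n_train], digits.target[:n_train])
>>> predicted = clf.predict(data[n_train:])
>>> np.mean(predicted == digits.target[n_train:])    # fraction correctly predicted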
Once we have learned from the data, we can access the parameters of the model:
>>> clf.coef_ ...
And it can be used to predict the most likely outcome on unseen data:
>>> clf.predict([[5.0, 3.6, 1.3, 0.25]])
array([0], dtype=int32)
>>> from scikits.learn import svm
>>> svc = svm.SVC(kernel='linear')
>>> svc.fit(iris.data, iris.target)
There are several support vector machine implementations in scikit-learn. The most used ones are svm.SVC, svm.NuSVC and svm.LinearSVC.

Exercise: Try classifying the digits dataset with svm.SVC. Leave out the last 10% and test prediction performance on these observations.
Using kernels

Classes are not always separable by a hyperplane, so it would be desirable to build a decision function that is not linear but that may be, for instance, polynomial or exponential:

Linear kernel:

>>> svc = svm.SVC(kernel='linear')

Polynomial kernel:

>>> svc = svm.SVC(kernel='poly', degree=3)
>>> # degree: polynomial degree

RBF kernel (Radial Basis Function):

>>> svc = svm.SVC(kernel='rbf')
>>> # gamma: inverse of size of radial kernel

Exercise: Which of the kernels noted above has a better prediction performance on the digits dataset?

15.3 Clustering: grouping observations together

15.3.1 K-means clustering

Clustering can be seen as a way of choosing a small number of observations to summarize the information. For instance, this can be used to posterize an image (conversion of a continuous gradation of tone to several regions of fewer tones):

>>> import scipy as sp
>>> import numpy as np
>>> from scikits.learn import cluster
>>> lena = sp.lena()
>>> X = lena.reshape((-1, 1))  # We need an (n_sample, n_feature) array
>>> k_means = cluster.KMeans(k=5)
>>> k_means.fit(X)
>>> values = k_means.cluster_centers_.squeeze()
>>> labels = k_means.labels_
>>> lena_compressed = np.choose(labels, values)
>>> lena_compressed.shape = lena.shape

(Figures: the raw image, its K-means quantization, and K-means posterizations with 3 and 8 clusters.)
15.3.2 Dimension Reduction with Principal Component Analysis

The cloud of points spanned by the observations above is very flat in one direction, so that one feature can almost be exactly computed using the two others. PCA finds the directions in which the data is not flat.
When used to transform data, PCA can reduce the dimensionality of the data by projecting on a principal subspace. Warning: Depending on your version of scikit-learn PCA will be in module decomposition or pca.
>>> from scikits.learn import decomposition >>> pca = decomposition.PCA(n_components=2) >>> pca.fit(iris.data) PCA(copy=True, n_components=2, whiten=False) >>> X = pca.transform(iris.data)
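The two retained components can then be inspected visually; a small sketch using a scatter plot colored by class (pylab imported as pl, as in the digits example above):

>>> import pylab as pl
>>> pl.scatter(X[:, 0], X[:, 1], c=iris.target)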
""" Stripped-down version of the face recognition example by Olivier Grisel http://scikit-learn.sourceforge.net/dev/auto_examples/applications/face_recognition.html ## original shape of images: 50, 37 """ import numpy as np from scikits.learn import cross_val, datasets, decomposition, svm
PCA is not just useful for visualization of high-dimensional datasets. It can also be used as a preprocessing step to help speed up supervised methods that are not computationally efficient with high dimensions.
15.4 Putting it all together: face recognition with Support Vector Machines
An example showcasing face recognition using Principal Component Analysis for dimension reduction and Support Vector Machines for classication.
# ..
# .. load data ..
lfw_people = datasets.fetch_lfw_people(min_faces_per_person=70, resize=0.4)
faces = np.reshape(lfw_people.data, (lfw_people.target.shape[0], -1))
train, test = iter(cross_val.StratifiedKFold(lfw_people.target, k=4)).next()
X_train, X_test = faces[train], faces[test]
y_train, y_test = lfw_people.target[train], lfw_people.target[test]

# ..
# .. dimension reduction ..
pca = decomposition.RandomizedPCA(n_components=150, whiten=True)
pca.fit(X_train)
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)

# ..
# .. classification ..
clf = svm.SVC(C=5., gamma=0.001)
clf.fit(X_train_pca, y_train)

# ..
# .. predict on new images ..
for i in range(1, 10):
    print lfw_people.target_names[clf.predict(X_test_pca[i])[0]]
    _ = pl.imshow(X_test[i].reshape(50, 37), cmap=pl.cm.gray)
    _ = raw_input()
Full code: faces.py