The Python Tutorial
The Python interpreter and the extensive standard library are freely available in source
or binary form for all major platforms from the Python Web site, https://www.python.org/,
and may be freely distributed. The same site also contains distributions of and pointers
to many free third party Python modules, programs and tools, and additional
documentation.
The Python interpreter is easily extended with new functions and data types
implemented in C or C++ (or other languages callable from C). Python is also suitable
as an extension language for customizable applications.
This tutorial introduces the reader informally to the basic concepts and features of the
Python language and system. It helps to have a Python interpreter handy for hands-on
experience, but all examples are self-contained, so the tutorial can be read off-line as
well.
For a description of standard objects and modules, see The Python Standard
Library. The Python Language Reference gives a more formal definition of the
language. To write extensions in C or C++, read Extending and Embedding the Python
Interpreter and Python/C API Reference Manual. There are also several books covering
Python in depth.
This tutorial does not attempt to be comprehensive and cover every single feature, or
even every commonly used feature. Instead, it introduces many of Python’s most
noteworthy features, and will give you a good idea of the language’s flavor and style.
After reading it, you will be able to read and write Python modules and programs, and
you will be ready to learn more about the various Python library modules described
in The Python Standard Library.
If you’re a professional software developer, you may have to work with several
C/C++/Java libraries but find the usual write/compile/test/re-compile cycle too slow.
Perhaps you’re writing a test suite for such a library and find writing the testing code a
tedious task. Or maybe you’ve written a program that could use an extension language,
and you don’t want to design and implement a whole new language for your application.
You could write a Unix shell script or Windows batch files for some of these tasks, but
shell scripts are best at moving around files and changing text data, not well-suited for
GUI applications or games. You could write a C/C++/Java program, but it can take a lot
of development time to get even a first-draft program. Python is simpler to use, available
on Windows, Mac OS X, and Unix operating systems, and will help you get the job done
more quickly.
Python is simple to use, but it is a real programming language, offering much more
structure and support for large programs than shell scripts or batch files can offer. On
the other hand, Python also offers much more error checking than C, and, being a very-
high-level language, it has high-level data types built in, such as flexible arrays and
dictionaries. Because of its more general data types Python is applicable to a much
larger problem domain than Awk or even Perl, yet many things are at least as easy in
Python as in those languages.
Python allows you to split your program into modules that can be reused in other Python
programs. It comes with a large collection of standard modules that you can use as the
basis of your programs — or as examples to start learning to program in Python. Some
of these modules provide things like file I/O, system calls, sockets, and even interfaces
to graphical user interface toolkits like Tk.
Python is an interpreted language, which can save you considerable time during
program development because no compilation and linking is necessary. The interpreter
can be used interactively, which makes it easy to experiment with features of the
language, to write throw-away programs, or to test functions during bottom-up program
development. It is also a handy desk calculator.
Python enables programs to be written compactly and readably. Programs written in
Python are typically much shorter than equivalent C, C++, or Java programs, for several
reasons: the high-level data types allow you to express complex operations in a single
statement; statement grouping is done by indentation instead of beginning and ending
brackets; no variable or argument declarations are necessary.
Python is extensible: if you know how to program in C it is easy to add a new built-in
function or module to the interpreter, either to perform critical operations at maximum
speed, or to link Python programs to libraries that may only be available in binary form
(such as a vendor-specific graphics library). Once you are really hooked, you can link
the Python interpreter into an application written in C and use it as an extension or
command language for that application.
By the way, the language is named after the BBC show “Monty Python’s Flying Circus”
and has nothing to do with reptiles. Making references to Monty Python skits in
documentation is not only allowed, it is encouraged!
Now that you are all excited about Python, you’ll want to examine it in some more detail.
Since the best way to learn a language is to use it, the tutorial invites you to play with
the Python interpreter as you read.
In the next chapter, the mechanics of using the interpreter are explained. This is rather
mundane information, but essential for trying out the examples shown later.
The rest of the tutorial introduces various features of the Python language and system
through examples, beginning with simple expressions, statements and data types,
through functions and modules, and finally touching upon advanced concepts like
exceptions and user-defined classes.
The Python interpreter is usually installed as /usr/local/bin/python3.4 on those machines
where it is available; putting /usr/local/bin in your Unix shell’s search path makes it
possible to start it by typing the command

python3.4
to the shell. [1] Since the choice of the directory where the interpreter lives is an
installation option, other places are possible; check with your local Python guru or
system administrator. (E.g., /usr/local/python is a popular alternative location.)
On Windows machines, the Python installation is usually placed in C:\Python34, though
you can change this when you’re running the installer. To add this directory to your
path, you can type the following command into the command prompt in a DOS box:

set path=%path%;C:\python34
The interpreter’s line-editing features include interactive editing, history substitution and
code completion on systems that support readline. Perhaps the quickest check to see
whether command line editing is supported is typing Control-P to the first Python prompt
you get. If it beeps, you have command line editing; see Appendix Interactive Input
Editing and History Substitution for an introduction to the keys. If nothing appears to
happen, or if ^P is echoed, command line editing isn’t available; you’ll only be able to
use backspace to remove characters from the current line.
The interpreter operates somewhat like the Unix shell: when called with standard input
connected to a tty device, it reads and executes commands interactively; when called
with a file name argument or with a file as standard input, it reads and executes
a script from that file.
A second way of starting the interpreter is python -c command [arg] ..., which executes
the statement(s) in command, analogous to the shell’s -c option. Since Python
statements often contain spaces or other characters that are special to the shell, it is
usually advised to quote command in its entirety with single quotes.
Some Python modules are also useful as scripts. These can be invoked using
python -m module [arg] ..., which executes the source file for module as if you had
spelled out its full name on the command line.
When a script file is used, it is sometimes useful to be able to run the script and enter
interactive mode afterwards. This can be done by passing -i before the script.
All command line options are described in Command line and environment.
$ python3.4
Python 3.4 (default, Mar 16 2014, 09:25:04)
[GCC 4.8.2] on linux
Type "help", "copyright", "credits" or "license" for more
information.
>>>
Continuation lines are needed when entering a multi-line construct. As an example, take
a look at this if statement:
>>> the_world_is_flat = True
>>> if the_world_is_flat:
...     print("Be careful not to fall off!")
...
Be careful not to fall off!
It is also possible to specify a different encoding for source files. In order to do this, put
one more special comment line right after the #! line to define the source file encoding:

# -*- coding: encoding -*-
With that declaration, everything in the source file will be treated as having the
encoding encoding instead of UTF-8. The list of possible encodings can be found in the
Python Library Reference, in the section on codecs.
For example, if your editor of choice does not support UTF-8 encoded files and insists
on using some other encoding, say Windows-1252, you can write:

# -*- coding: cp-1252 -*-
and still use all characters in the Windows-1252 character set in the source files. The
special encoding comment must be in the first or second line within the file.
Footnotes
[1] On Unix, the Python 3.x interpreter is by default not installed with the executable
named python, so that it does not conflict with a simultaneously installed Python 2.x
executable.
Many of the examples in this manual, even those entered at the interactive prompt,
include comments. Comments in Python start with the hash character, #, and extend to
the end of the physical line. A comment may appear at the start of a line or following
whitespace or code, but not within a string literal. A hash character within a string literal
is just a hash character. Since comments are to clarify code and are not interpreted by
Python, they may be omitted when typing in examples.
Some examples:
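For example, a short sketch covering each case described in the paragraph above (the names spam and text are illustrative):

```python
# this is the first comment
spam = 1  # and this is the second comment
          # ... and now a third!
text = "# This is not a comment because it's inside quotes."
```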
3.1.1. Numbers
The interpreter acts as a simple calculator: you can type an expression at it and it will
write the value. Expression syntax is straightforward: the operators +, -, * and / work just
like in most other languages (for example, Pascal or C); parentheses (()) can be used
for grouping. For example:
>>> 2 + 2
4
>>> 50 - 5*6
20
>>> (50 - 5*6) / 4
5.0
>>> 8 / 5 # division always returns a floating point number
1.6
The integer numbers (e.g. 2, 4, 20) have type int, the ones with a fractional part
(e.g. 5.0, 1.6) have type float. We will see more about numeric types later in the tutorial.
Division (/) always returns a float. To do floor division and get an integer result
(discarding any fractional result) you can use the // operator; to calculate the remainder
you can use %:
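Rewritten as plain statements, the operators behave like this (the numbers are illustrative):

```python
q = 17 // 3       # floor division discards the fractional part
r = 17 % 3        # the % operator returns the remainder of the division
print(17 / 3)     # classic division always returns a float
print(q * 3 + r)  # floored quotient * divisor + remainder gives back 17
```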
>>> 5 ** 2 # 5 squared
25
>>> 2 ** 7 # 2 to the power of 7
128
The equal sign (=) is used to assign a value to a variable. Afterwards, no result is
displayed before the next interactive prompt:
>>> width = 20
>>> height = 5 * 9
>>> width * height
900
If a variable is not “defined” (assigned a value), trying to use it will give you an error:
There is full support for floating point; operators with mixed type operands convert the
integer operand to floating point:
In interactive mode, the last printed expression is assigned to the variable _. This
means that when you are using Python as a desk calculator, it is somewhat easier to
continue calculations, for example:
This variable should be treated as read-only by the user. Don’t explicitly assign a value
to it — you would create an independent local variable with the same name masking the
built-in variable with its magic behavior.
In addition to int and float, Python supports other types of numbers, such
as Decimal and Fraction. Python also has built-in support for complex numbers, and uses
the j or J suffix to indicate the imaginary part (e.g. 3+5j).
3.1.2. Strings
Besides numbers, Python can also manipulate strings, which can be expressed in
several ways. They can be enclosed in single quotes ( '...') or double quotes ("...") with
the same result [2]. \ can be used to escape quotes:
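For example, a sketch showing that the two spellings produce the same string:

```python
s1 = 'doesn\'t'   # use \' to escape the single quote...
s2 = "doesn't"    # ...or use double quotes instead
print(s1 == s2)   # both spellings denote the same string
```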
In the interactive interpreter, the output string is enclosed in quotes and special
characters are escaped with backslashes. While this might sometimes look different
from the input (the enclosing quotes could change), the two strings are equivalent. The
string is enclosed in double quotes if the string contains a single quote and no double
quotes, otherwise it is enclosed in single quotes. The print() function produces a more
readable output, by omitting the enclosing quotes and by printing escaped and special
characters:
String literals can span multiple lines. One way is using triple-quotes: """...""" or '''...'''.
End of lines are automatically included in the string, but it’s possible to prevent this by
adding a \ at the end of the line. The following example:
print("""\
Usage: thingy [OPTIONS]
     -h                        Display this usage message
     -H hostname               Hostname to connect to
""")

produces the following output (note that the initial newline is not included):

Usage: thingy [OPTIONS]
     -h                        Display this usage message
     -H hostname               Hostname to connect to
Strings can be concatenated (glued together) with the + operator, and repeated with *:
Two or more string literals (i.e. the ones enclosed between quotes) next to each other
are automatically concatenated.
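For example, a quick sketch:

```python
text = 'Py' 'thon'   # two adjacent string literals are concatenated
print(text)
```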
This only works with two literals though, not with variables or expressions:
This feature is particularly useful when you want to break long strings:
Strings can be indexed (subscripted), with the first character having index 0. There is no
separate character type; a character is simply a string of size one:
Indices may also be negative numbers, to start counting from the right:
Note that since -0 is the same as 0, negative indices start from -1.
Note how the start is always included, and the end always excluded. This makes sure
that s[:i] + s[i:] is always equal to s:
Slice indices have useful defaults; an omitted first index defaults to zero, an omitted
second index defaults to the size of the string being sliced.
>>> word[:2]   # characters from the beginning to position 2 (excluded)
'Py'
>>> word[4:]   # characters from position 4 (included) to the end
'on'
>>> word[-2:]  # characters from the second-last (included) to the end
'on'
+---+---+---+---+---+---+
| P | y | t | h | o | n |
+---+---+---+---+---+---+
0 1 2 3 4 5 6
-6 -5 -4 -3 -2 -1
The first row of numbers gives the position of the indices 0...6 in the string; the second
row gives the corresponding negative indices. The slice from i to j consists of all
characters between the edges labeled i and j, respectively.
For non-negative indices, the length of a slice is the difference of the indices, if both are
within bounds. For example, the length of word[1:3] is 2.
However, out of range slice indexes are handled gracefully when used for slicing:
>>> word[4:42]
'on'
>>> word[42:]
''
>>> s = 'supercalifragilisticexpialidocious'
>>> len(s)
34
See also
Text Sequence Type — str
Strings are examples of sequence types, and support the common operations supported
by such types.
String Methods
Strings support a large number of methods for basic transformations and searching.
String Formatting
Information about string formatting with str.format() is described here.
printf-style String Formatting
The old formatting operations invoked when strings and Unicode strings are the left
operand of the % operator are described in more detail here.
3.1.3. Lists
Python knows a number of compound data types, used to group together other values.
The most versatile is the list, which can be written as a list of comma-separated values
(items) between square brackets. Lists might contain items of different types, but
usually the items all have the same type.
Like strings (and all other built-in sequence types), lists can be indexed and sliced:
All slice operations return a new list containing the requested elements. This means that
the following slice returns a new (shallow) copy of the list:
>>> squares[:]
[1, 4, 9, 16, 25]
You can also add new items at the end of the list, by using the append() method (we will
see more about methods later):
Assignment to slices is also possible, and this can even change the size of the list or
clear it entirely:
It is possible to nest lists (create lists containing other lists), for example:
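The loop discussed in the next paragraphs prints an initial sub-series of the Fibonacci numbers; it can be sketched as:

```python
# Multiple assignment: a and b simultaneously get the values 0 and 1.
a, b = 0, 1
while b < 10:
    print(b)
    # The right-hand side is evaluated completely before any assignment.
    a, b = b, a + b
```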
The first line contains a multiple assignment: the variables a and b simultaneously get
the new values 0 and 1. On the last line this is used again, demonstrating that the
expressions on the right-hand side are all evaluated first before any of the assignments
take place. The right-hand side expressions are evaluated from the left to the right.
The while loop executes as long as the condition (here: b < 10) remains true. In Python,
like in C, any non-zero integer value is true; zero is false. The condition may also be a
string or list value, in fact any sequence; anything with a non-zero length is true, empty
sequences are false. The test used in the example is a simple comparison. The
standard comparison operators are written the same as in C: < (less than), > (greater
than), == (equal to), <= (less than or equal to), >= (greater than or equal to) and != (not
equal to).
The body of the loop is indented: indentation is Python’s way of grouping statements. At
the interactive prompt, you have to type a tab or space(s) for each indented line. In
practice you will prepare more complicated input for Python with a text editor; all decent
text editors have an auto-indent facility. When a compound statement is entered
interactively, it must be followed by a blank line to indicate completion (since the parser
cannot guess when you have typed the last line). Note that each line within a basic
block must be indented by the same amount.
The print() function writes the value of the argument(s) it is given. It differs from just
writing the expression you want to write (as we did earlier in the calculator examples) in
the way it handles multiple arguments, floating point quantities, and strings. Strings are
printed without quotes, and a space is inserted between items, so you can format things
nicely, like this:
>>> i = 256*256
>>> print('The value of i is', i)
The value of i is 65536
The keyword argument end can be used to avoid the newline after the output, or end
the output with a different string:
>>> a, b = 0, 1
>>> while b < 1000:
... print(b, end=',')
... a, b = b, a+b
...
1,1,2,3,5,8,13,21,34,55,89,144,233,377,610,987,
4.1. if Statements
Perhaps the most well-known statement type is the if statement. For example:
There can be zero or more elif parts, and the else part is optional. The keyword
'elif' is short for 'else if', and is useful to avoid excessive indentation.
An if ... elif ... elif ... sequence is a substitute for the switch or case statements
found in other languages.
If you need to modify the sequence you are iterating over while inside the loop (for
example to duplicate selected items), it is recommended that you first make a copy.
Iterating over a sequence does not implicitly make a copy. The slice notation makes this
especially convenient:
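For example, a sketch that loops over a slice copy while inserting into the original list (the word list is illustrative):

```python
words = ['cat', 'window', 'defenestrate']
for w in words[:]:          # iterate over a copy of the entire list
    if len(w) > 6:
        words.insert(0, w)  # safe: we modify words, not the copy
print(words)
```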
The given end point is never part of the generated sequence; range(10) generates 10
values, the legal indices for items of a sequence of length 10. It is possible to let the
range start at another number, or to specify a different increment (even negative;
sometimes this is called the ‘step’):
range(5, 10)
   5 through 9
range(0, 10, 3)
   0, 3, 6, 9
To iterate over the indices of a sequence, you can combine range() and len() as
follows:
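For example:

```python
a = ['Mary', 'had', 'a', 'little', 'lamb']
for i in range(len(a)):   # indices 0 .. len(a)-1
    print(i, a[i])
```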
>>> print(range(10))
range(0, 10)
In many ways the object returned by range() behaves as if it is a list, but in fact it isn’t.
It is an object which returns the successive items of the desired sequence when you
iterate over it, but it doesn’t really make the list, thus saving space.
We say such an object is iterable, that is, suitable as a target for functions and
constructs that expect something from which they can obtain successive items until the
supply is exhausted. We have seen that the for statement is such an iterator. The
function list() is another; it creates lists from iterables:
>>> list(range(5))
[0, 1, 2, 3, 4]
Later we will see more functions that return iterables and take iterables as argument.
Loop statements may have an else clause; it is executed when the loop terminates
through exhaustion of the list (with for) or when the condition becomes false
(with while), but not when the loop is terminated by a break statement. This is
exemplified by the following loop, which searches for prime numbers:
>>> for n in range(2, 10):
... for x in range(2, n):
... if n % x == 0:
... print(n, 'equals', x, '*', n//x)
... break
... else:
... # loop fell through without finding a factor
... print(n, 'is a prime number')
...
2 is a prime number
3 is a prime number
4 equals 2 * 2
5 is a prime number
6 equals 2 * 3
7 is a prime number
8 equals 2 * 4
9 equals 3 * 3
(Yes, this is the correct code. Look closely: the else clause belongs to
the for loop, not the if statement.)
When used with a loop, the else clause has more in common with the else clause of
a try statement than with that of if statements: a try statement’s else clause runs
when no exception occurs, and a loop’s else clause runs when no break occurs. For
more on the try statement and exceptions, see Handling Exceptions.
The continue statement, also borrowed from C, continues with the next iteration of the
loop:
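A sketch (the even/odd printout is illustrative):

```python
odds = []
for num in range(2, 10):
    if num % 2 == 0:
        print("Found an even number", num)
        continue            # jump straight to the next iteration
    odds.append(num)
    print("Found an odd number", num)
```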
Another place pass can be used is as a place-holder for a function or conditional body
when you are working on new code, allowing you to keep thinking at a more abstract
level. The pass is silently ignored:
The keyword def introduces a function definition. It must be followed by the function
name and the parenthesized list of formal parameters. The statements that form the
body of the function start at the next line, and must be indented.
The first statement of the function body can optionally be a string literal; this string literal
is the function’s documentation string, or docstring. (More about docstrings can be
found in the section Documentation Strings.) There are tools which use docstrings to
automatically produce online or printed documentation, or to let the user interactively
browse through code; it’s good practice to include docstrings in code that you write, so
make a habit of it.
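Putting the pieces of this section together, here is a small function with a docstring that prints a Fibonacci series (it matches the fib shown interactively below):

```python
def fib(n):
    """Print a Fibonacci series up to n."""
    a, b = 0, 1
    while a < n:
        print(a, end=' ')
        a, b = b, a + b
    print()

fib(100)
```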
The execution of a function introduces a new symbol table used for the local variables
of the function. More precisely, all variable assignments in a function store the value in
the local symbol table; whereas variable references first look in the local symbol table,
then in the local symbol tables of enclosing functions, then in the global symbol table,
and finally in the table of built-in names. Thus, global variables cannot be directly
assigned a value within a function (unless named in a global statement), although they
may be referenced.
The actual parameters (arguments) to a function call are introduced in the local symbol
table of the called function when it is called; thus, arguments are passed using call by
value (where the value is always an object reference, not the value of the
object). [1] When a function calls another function, a new local symbol table is created
for that call.
A function definition introduces the function name in the current symbol table. The value
of the function name has a type that is recognized by the interpreter as a user-defined
function. This value can be assigned to another name which can then also be used as a
function. This serves as a general renaming mechanism:
>>> fib
<function fib at 10042ed0>
>>> f = fib
>>> f(100)
0 1 1 2 3 5 8 13 21 34 55 89
Coming from other languages, you might object that fib is not a function but a
procedure since it doesn’t return a value. In fact, even functions without
a return statement do return a value, albeit a rather boring one. This value is
called None (it’s a built-in name). Writing the value None is normally suppressed by the
interpreter if it would be the only value written. You can see it if you really want to
using print():
>>> fib(0)
>>> print(fib(0))
None
It is simple to write a function that returns a list of the numbers of the Fibonacci series,
instead of printing it:
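A sketch of such a function:

```python
def fib2(n):
    """Return a list containing the Fibonacci series up to n."""
    result = []
    a, b = 0, 1
    while a < n:
        result.append(a)   # calls a method of the list object result
        a, b = b, a + b
    return result

print(fib2(100))
```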
The return statement returns with a value from a function. return without an
expression argument returns None. Falling off the end of a function also returns None.
The statement result.append(a) calls a method of the list object result. A
method is a function that ‘belongs’ to an object and is named obj.methodname,
where obj is some object (this may be an expression), and methodname is the name
of a method that is defined by the object’s type. Different types define different methods.
Methods of different types may have the same name without causing ambiguity. (It is
possible to define your own object types and methods, using classes, see Classes) The
method append() shown in the example is defined for list objects; it adds a new
element at the end of the list. In this example it is equivalent
to result = result + [a], but more efficient.
The default values are evaluated at the point of function definition in the defining scope,
so that
i = 5

def f(arg=i):
    print(arg)

i = 6
f()
will print 5.
Important warning: The default value is evaluated only once. This makes a difference
when the default is a mutable object such as a list, dictionary, or instances of most
classes. For example, the following function accumulates the arguments passed to it on
subsequent calls:
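The definition omitted here is presumably the classic mutable-default example; the printed results below follow from it:

```python
def f(a, L=[]):   # the default list is created once, at definition time
    L.append(a)
    return L
```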
print(f(1))
print(f(2))
print(f(3))
[1]
[1, 2]
[1, 2, 3]
If you don’t want the default to be shared between subsequent calls, you can write the
function like this instead:
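A sketch using None as a sentinel for "no list passed in":

```python
def f(a, L=None):
    if L is None:   # a fresh list is created on every call
        L = []
    L.append(a)
    return L
```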
parrot(1000)                                          # 1 positional argument
parrot(voltage=1000)                                  # 1 keyword argument
parrot(voltage=1000000, action='VOOOOOM')             # 2 keyword arguments
parrot(action='VOOOOOM', voltage=1000000)             # 2 keyword arguments
parrot('a million', 'bereft of life', 'jump')         # 3 positional arguments
parrot('a thousand', state='pushing up the daisies')  # 1 positional, 1 keyword
In a function call, keyword arguments must follow positional arguments. All the keyword
arguments passed must match one of the arguments accepted by the function
(e.g. actor is not a valid argument for the parrot function), and their order is not
important. This also includes non-optional arguments (e.g. parrot(voltage=1000) is
valid too). No argument may receive a value more than once. Here’s an example that
fails due to this restriction:
When a final formal parameter of the form **name is present, it receives a dictionary
(see Mapping Types — dict) containing all keyword arguments except for those
corresponding to a formal parameter. This may be combined with a formal parameter of
the form *name (described in the next subsection) which receives a tuple containing the
positional arguments beyond the formal parameter list. (*name must occur
before **name.) For example, if we define a function like this:
Note that the list of keyword argument names is created by sorting the result of the
keywords dictionary’s keys() method before printing its contents; if this is not done, the
order in which the arguments are printed is undefined.
Normally, these variadic arguments will be last in the list of formal parameters,
because they scoop up all remaining input arguments that are passed to the function.
Any formal parameters which occur after the *args parameter are ‘keyword-only’
arguments, meaning that they can only be used as keywords rather than positional
arguments.
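For example, a sketch of a function with a keyword-only parameter after *args (the name concat and the separator default are illustrative):

```python
def concat(*args, sep="/"):
    # sep comes after *args, so it can only be given as a keyword
    return sep.join(args)

print(concat("earth", "mars", "venus"))           # earth/mars/venus
print(concat("earth", "mars", "venus", sep="."))  # earth.mars.venus
```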
In the same fashion, dictionaries can deliver keyword arguments with the **-operator:
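For example (this parrot is a simplified sketch that returns its line as a string rather than printing it):

```python
def parrot(voltage, state='a stiff', action='voom'):
    return "This parrot wouldn't {} if you put {} volts through it. E's {}!".format(
        action, voltage, state)

d = {"voltage": "four million", "state": "bleedin' demised", "action": "VOOM"}
print(parrot(**d))   # each dict key is delivered as a keyword argument
```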
>>> def make_incrementor(n):
... return lambda x: x + n
...
>>> f = make_incrementor(42)
>>> f(0)
42
>>> f(1)
43
The above example uses a lambda expression to return a function. Another use is to
pass a small function as an argument:
>>> pairs = [(1, 'one'), (2, 'two'), (3, 'three'), (4, 'four')]
>>> pairs.sort(key=lambda pair: pair[1])
>>> pairs
[(4, 'four'), (1, 'one'), (3, 'three'), (2, 'two')]
The first line should always be a short, concise summary of the object’s purpose. For
brevity, it should not explicitly state the object’s name or type, since these are available
by other means (except if the name happens to be a verb describing a function’s
operation). This line should begin with a capital letter and end with a period.
If there are more lines in the documentation string, the second line should be blank,
visually separating the summary from the rest of the description. The following lines
should be one or more paragraphs describing the object’s calling conventions, its side
effects, etc.
The Python parser does not strip indentation from multi-line string literals, so
tools that process documentation have to strip indentation if desired. This is done using
the following convention. The first non-blank line after the first line of the string
determines the amount of indentation for the entire documentation string. (We can’t use
the first line since it is generally adjacent to the string’s opening quotes so its
indentation is not apparent in the string literal.) Whitespace “equivalent” to this
indentation is then stripped from the start of all lines of the string. Lines that are
indented less should not occur, but if they occur all their leading whitespace should be
stripped. Equivalence of whitespace should be tested after expansion of tabs (to 8
spaces, normally).
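For example, a small function whose docstring follows these conventions:

```python
def my_function():
    """Do nothing, but document it.

    No, really, it doesn't do anything.
    """
    pass

print(my_function.__doc__)
```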
For Python, PEP 8 has emerged as the style guide that most projects adhere to; it
promotes a very readable and eye-pleasing coding style. Every Python developer
should read it at some point; here are the most important points extracted for you:
Wrap lines so that they don’t exceed 79 characters. This helps users with small
displays and makes it possible to have several code files side-by-side on larger
displays.
Use blank lines to separate functions and classes, and larger blocks of code
inside functions.
Use docstrings.
Use spaces around operators and after commas, but not directly inside
bracketing constructs: a = f(1, 2) + g(3, 4).
Don’t use non-ASCII characters in identifiers if there is only the slightest
chance people speaking a different language will read or maintain the code.
Footnotes
[1] Actually, call by object reference would be a better description, since if a mutable
object is passed, the caller will see any changes the callee makes to it (items inserted
into a list).
5. Data Structures
This chapter describes some things you’ve learned about already in more detail, and
adds some new things as well.
list.append(x)
Add an item to the end of the list. Equivalent to a[len(a):] = [x].
list.extend(L)
Extend the list by appending all the items in the given list. Equivalent
to a[len(a):] = L.
list.insert(i, x)
Insert an item at a given position. The first argument is the index of the element
before which to insert, so a.insert(0, x) inserts at the front of the list,
and a.insert(len(a), x) is equivalent to a.append(x).
list.remove(x)
Remove the first item from the list whose value is x. It is an error if there is no
such item.
list.pop([i])
Remove the item at the given position in the list, and return it. If no index is
specified, a.pop() removes and returns the last item in the list. (The square
brackets around the i in the method signature denote that the parameter is
optional, not that you should type square brackets at that position. You will see
this notation frequently in the Python Library Reference.)
list.clear()
Remove all items from the list. Equivalent to del a[:].
list.index(x)
Return the index in the list of the first item whose value is x. It is an error if there
is no such item.
list.count(x)
Return the number of times x appears in the list.
list.sort()
Sort the items of the list in place.
list.reverse()
Reverse the elements of the list in place.
list.copy()
Return a shallow copy of the list. Equivalent to a[:].
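A sketch exercising several of these methods (the values are illustrative):

```python
a = [66.25, 333, 333, 1, 1234.5]
print(a.count(333), a.count(66.25), a.count('x'))  # 2 1 0
a.insert(2, -1)
a.append(333)
print(a.index(333))  # position of the first occurrence: 1
a.remove(333)        # removes that first occurrence
a.reverse()
a.sort()
print(a)
```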
You might have noticed that methods like insert, remove or sort that only modify the
list have no return value printed – they return the default None. [1] This is a design
principle for all mutable data structures in Python.
>>> squares = []
>>> for x in range(10):
... squares.append(x**2)
...
>>> squares
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
Note that this creates (or overwrites) a variable named x that still exists after the loop
completes. We can calculate the list of squares without any side effects using:
or, equivalently:
>>> combs = []
>>> for x in [1,2,3]:
... for y in [3,1,4]:
... if x != y:
... combs.append((x, y))
...
>>> combs
[(1, 3), (1, 4), (2, 3), (2, 1), (2, 4), (3, 1), (3, 4)]
Note how the order of the for and if statements is the same in both these snippets.
If the expression is a tuple (e.g. the (x, y) in the previous example), it must be
parenthesized.
>>> matrix = [
... [1, 2, 3, 4],
... [5, 6, 7, 8],
... [9, 10, 11, 12],
... ]
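The nested list comprehension referred to below can be sketched as:

```python
matrix = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
]
# One inner comprehension per column index i, evaluated left to right.
transposed = [[row[i] for row in matrix] for i in range(4)]
print(transposed)
```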
As we saw in the previous section, the nested listcomp is evaluated in the context of
the for that follows it, so this example is equivalent to:
>>> transposed = []
>>> for i in range(4):
... transposed.append([row[i] for row in matrix])
...
>>> transposed
[[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]
which, in turn, is the same as:
>>> transposed = []
>>> for i in range(4):
... # the following 3 lines implement the nested listcomp
... transposed_row = []
... for row in matrix:
... transposed_row.append(row[i])
... transposed.append(transposed_row)
...
>>> transposed
[[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]
In the real world, you should prefer built-in functions to complex flow statements.
The zip() function would do a great job for this use case:
>>> list(zip(*matrix))
[(1, 5, 9), (2, 6, 10), (3, 7, 11), (4, 8, 12)]
See Unpacking Argument Lists for details on the asterisk in this line.
5.2. The del statement
There is a way to remove an item from a list given its index instead of its value: the del statement. Unlike list.pop(), del does not return a value. It can also be used to remove slices from a list or clear the entire list.
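For example, del can remove individual items or slices (the sample list is illustrative):

```python
a = [-1, 1, 66.25, 333, 333, 1234.5]
del a[0]
after_first = list(a)    # [1, 66.25, 333, 333, 1234.5]
del a[2:4]
after_slice = list(a)    # [1, 66.25, 1234.5]
del a[:]                 # a is now []
```

del can also be used to delete entire variables: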
>>> del a
Referencing the name a hereafter is an error (at least until another value is assigned to
it). We’ll find other uses for del later.
5.3. Tuples and Sequences
We saw that lists and strings have many common properties, such as indexing and
slicing operations. They are two examples of sequence data types (see Sequence
Types — list, tuple, range). Since Python is an evolving language, other sequence data
types may be added. There is also another standard sequence data type: the tuple.
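For example, a tuple consists of a number of values separated by commas:

```python
t = 12345, 54321, 'hello!'
first = t[0]                # 12345
u = t, (1, 2, 3, 4, 5)      # tuples may be nested
```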
As you see, on output tuples are always enclosed in parentheses, so that nested tuples
are interpreted correctly; they may be input with or without surrounding parentheses,
although often parentheses are necessary anyway (if the tuple is part of a larger
expression). It is not possible to assign to the individual items of a tuple, however it is
possible to create tuples which contain mutable objects, such as lists.
Though tuples may seem similar to lists, they are often used in different situations and
for different purposes. Tuples are immutable, and usually contain a heterogeneous
sequence of elements that are accessed via unpacking (see later in this section) or
indexing (or even by attribute in the case of namedtuples). Lists are mutable, and their
elements are usually homogeneous and are accessed by iterating over the list.
A special problem is the construction of tuples containing 0 or 1 items: the syntax has
some extra quirks to accommodate these. Empty tuples are constructed by an empty
pair of parentheses; a tuple with one item is constructed by following a value with a
comma (it is not sufficient to enclose a single value in parentheses). Ugly, but effective.
For example:
>>> empty = ()
>>> singleton = 'hello', # <-- note trailing comma
>>> len(empty)
0
>>> len(singleton)
1
>>> singleton
('hello',)
The statement t = 12345, 54321, 'hello!' is an example of tuple packing: the values 12345, 54321 and 'hello!' are packed together in a tuple. The reverse operation is also possible:
>>> x, y, z = t
This is called, appropriately enough, sequence unpacking and works for any sequence
on the right-hand side. Sequence unpacking requires that there are as many variables
on the left side of the equals sign as there are elements in the sequence. Note that
multiple assignment is really just a combination of tuple packing and sequence
unpacking.
5.4. Sets
Python also includes a data type for sets. A set is an unordered collection with no
duplicate elements. Basic uses include membership testing and eliminating duplicate
entries. Set objects also support mathematical operations like union, intersection,
difference, and symmetric difference.
Curly braces or the set() function can be used to create sets. Note: to create an empty
set you have to use set(), not {}; the latter creates an empty dictionary, a data
structure that we discuss in the next section.
>>> basket = {'apple', 'orange', 'apple', 'pear', 'orange', 'banana'}
>>> print(basket)                      # show that duplicates have been removed
{'orange', 'banana', 'pear', 'apple'}
>>> 'orange' in basket                 # fast membership testing
True
>>> 'crabgrass' in basket
False
5.5. Dictionaries
Another useful data type built into Python is the dictionary (see Mapping Types — dict). Unlike sequences, which are indexed by a range of numbers, dictionaries are indexed by keys, which can be any immutable type; strings and numbers can always be keys.
It is best to think of a dictionary as an unordered set of key: value pairs, with the
requirement that the keys are unique (within one dictionary). A pair of braces creates an
empty dictionary: {}. Placing a comma-separated list of key:value pairs within the
braces adds initial key:value pairs to the dictionary; this is also the way dictionaries are
written on output.
The main operations on a dictionary are storing a value with some key and extracting
the value given the key. It is also possible to delete a key:value pair with del. If you
store using a key that is already in use, the old value associated with that key is
forgotten. It is an error to extract a value using a non-existent key.
Performing list(d.keys()) on a dictionary returns a list of all the keys used in the
dictionary, in arbitrary order (if you want it sorted, just
use sorted(d.keys()) instead). [2] To check whether a single key is in the dictionary,
use the in keyword.
>>> tel = {'jack': 4098, 'sape': 4139}
>>> tel['guido'] = 4127
>>> tel['jack']
4098
>>> del tel['sape']
>>> tel['irv'] = 4127
>>> sorted(tel.keys())
['guido', 'irv', 'jack']
>>> 'guido' in tel
True
The dict() constructor builds dictionaries directly from sequences of key-value pairs:
>>> dict([('sape', 4139), ('guido', 4127), ('jack', 4098)])
{'sape': 4139, 'guido': 4127, 'jack': 4098}
In addition, dict comprehensions can be used to create dictionaries from arbitrary key
and value expressions:
>>> {x: x**2 for x in (2, 4, 6)}
{2: 4, 4: 16, 6: 36}
When the keys are simple strings, it is sometimes easier to specify pairs using keyword
arguments:
>>> dict(sape=4139, guido=4127, jack=4098)
{'sape': 4139, 'guido': 4127, 'jack': 4098}
5.6. Looping Techniques
When looping through dictionaries, the key and corresponding value can be retrieved at the same time using the items() method.
When looping through a sequence, the position index and corresponding value can be
retrieved at the same time using the enumerate() function.
>>> for i, v in enumerate(['tic', 'tac', 'toe']):
...     print(i, v)
...
0 tic
1 tac
2 toe
To loop over two or more sequences at the same time, the entries can be paired with
the zip() function.
>>> questions = ['name', 'quest', 'favorite color']
>>> answers = ['lancelot', 'the holy grail', 'blue']
>>> for q, a in zip(questions, answers):
...     print('What is your {0}?  It is {1}.'.format(q, a))
...
What is your name?  It is lancelot.
What is your quest?  It is the holy grail.
What is your favorite color?  It is blue.
To loop over a sequence in reverse, first specify the sequence in a forward direction
and then call the reversed() function.
>>> for i in reversed(range(1, 10, 2)):
...     print(i)
...
9
7
5
3
1
To loop over a sequence in sorted order, use the sorted() function which returns a
new sorted list while leaving the source unaltered.
>>> basket = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana']
>>> for f in sorted(set(basket)):
...     print(f)
...
apple
banana
orange
pear
To change a sequence you are iterating over while inside the loop (for example to
duplicate certain items), it is recommended that you first make a copy. Looping over a
sequence does not implicitly make a copy. The slice notation makes this especially
convenient:
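For example (a sketch with an illustrative word list):

```python
words = ['cat', 'window', 'defenestrate']
for w in words[:]:  # loop over a slice copy of the entire list
    if len(w) > 6:
        words.insert(0, w)
```

Without the slice copy, inserting at the front would make the loop encounter 'defenestrate' again and again, and it would never terminate.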
5.7. More on Conditions
The conditions used in while and if statements can contain any operators, not just comparisons.
The comparison operators in and not in check whether a value occurs (does not
occur) in a sequence. The operators is and is not compare whether two objects are
really the same object; this only matters for mutable objects like lists. All comparison
operators have the same priority, which is lower than that of all numerical operators.
Comparisons can be chained. For example, a < b == c tests whether a is less
than b and moreover b equals c.
Comparisons may be combined using the Boolean operators and and or, and the
outcome of a comparison (or of any other Boolean expression) may be negated
with not. These have lower priorities than comparison operators; between
them, not has the highest priority and or the lowest, so that A and not B or C is
equivalent to (A and (not B)) or C. As always, parentheses can be used to
express the desired composition.
The Boolean operators and and or are so-called short-circuit operators: their
arguments are evaluated from left to right, and evaluation stops as soon as the outcome
is determined. For example, if A and C are true but B is false, A and B and C does not
evaluate the expression C. When used as a general value and not as a Boolean, the
return value of a short-circuit operator is the last evaluated argument.
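For example:

```python
string1, string2, string3 = '', 'Trondheim', 'Hammer Dance'
non_null = string1 or string2 or string3   # the first true (non-empty) value
```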
5.8. Comparing Sequences and Other Types
Sequence objects may be compared to other objects with the same sequence type. The comparison uses lexicographical ordering: first the first two items are compared, and if they differ this determines the outcome of the comparison; if they are equal, the next two items are compared, and so on, until either sequence is exhausted.
Note that comparing objects of different types with < or > is legal provided that the
objects have appropriate comparison methods. For example, mixed numeric types are
compared according to their numeric value, so 0 equals 0.0, etc. Otherwise, rather than
providing an arbitrary ordering, the interpreter will raise a TypeError exception.
Footnotes
[1] Other languages may return the mutated object, which allows method chaining, such as d->insert("a")->remove("b")->sort();.
[2] Calling d.keys() will return a dictionary view object. It supports operations like membership test and iteration, but its contents are not independent of the original dictionary – it is only a view.
6. Modules
If you quit from the Python interpreter and enter it again, the definitions you have made
(functions and variables) are lost. Therefore, if you want to write a somewhat longer
program, you are better off using a text editor to prepare the input for the interpreter and
running it with that file as input instead. This is known as creating a script. As your
program gets longer, you may want to split it into several files for easier maintenance.
You may also want to use a handy function that you’ve written in several programs
without copying its definition into each program.
To support this, Python has a way to put definitions in a file and use them in a script or
in an interactive instance of the interpreter. Such a file is called a module; definitions
from a module can be imported into other modules or into the main module (the
collection of variables that you have access to in a script executed at the top level and
in calculator mode).
A module is a file containing Python definitions and statements. The file name is the
module name with the suffix .py appended. Within a module, the module’s name (as a
string) is available as the value of the global variable __name__. For instance, use your
favorite text editor to create a file called fibo.py in the current directory with the following
contents:
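For example, fibo.py can contain the two functions used in the examples that follow:

```python
# Fibonacci numbers module

def fib(n):    # write Fibonacci series up to n
    a, b = 0, 1
    while b < n:
        print(b, end=' ')
        a, b = b, a + b
    print()

def fib2(n):   # return Fibonacci series up to n
    result = []
    a, b = 0, 1
    while b < n:
        result.append(b)
        a, b = b, a + b
    return result
```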
Now enter the Python interpreter and import this module with the following command:
>>> import fibo
This does not enter the names of the functions defined in fibo directly in the current
symbol table; it only enters the module name fibo there. Using the module name you
can access the functions:
>>> fibo.fib(1000)
1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987
>>> fibo.fib2(100)
[1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
>>> fibo.__name__
'fibo'
If you intend to use a function often you can assign it to a local name:
>>> fib = fibo.fib
>>> fib(500)
1 1 2 3 5 8 13 21 34 55 89 144 233 377
Each module has its own private symbol table, which is used as the global symbol table
by all functions defined in the module. Thus, the author of a module can use global
variables in the module without worrying about accidental clashes with a user’s global
variables. On the other hand, if you know what you are doing you can touch a module’s
global variables with the same notation used to refer to its functions, modname.itemname.
Modules can import other modules. It is customary but not required to place
all import statements at the beginning of a module (or script, for that matter). The
imported module names are placed in the importing module’s global symbol table.
There is a variant of the import statement that imports names from a module directly into
the importing module’s symbol table. For example:
>>> from fibo import fib, fib2
>>> fib(500)
1 1 2 3 5 8 13 21 34 55 89 144 233 377
There is even a variant to import all names that a module defines:
>>> from fibo import *
>>> fib(500)
1 1 2 3 5 8 13 21 34 55 89 144 233 377
This imports all names except those beginning with an underscore (_). In most cases
Python programmers do not use this facility since it introduces an unknown set of
names into the interpreter, possibly hiding some things you have already defined.
Note that in general the practice of importing * from a module or package is frowned
upon, since it often causes poorly readable code. However, it is okay to use it to save
typing in interactive sessions.
Note
For efficiency reasons, each module is only imported once per interpreter session.
Therefore, if you change your modules, you must restart the interpreter – or, if it’s just
one module you want to test interactively, use importlib.reload(),
e.g. import importlib; importlib.reload(modulename).
6.1.1. Executing modules as scripts
When you run a Python module with python fibo.py <arguments>,
the code in the module will be executed, just as if you imported it, but with
the __name__ set to "__main__". That means that by adding this code at the end of your
module:
if __name__ == "__main__":
    import sys
    fib(int(sys.argv[1]))
you can make the file usable as a script as well as an importable module, because the
code that parses the command line only runs if the module is executed as the “main”
file:
$ python fibo.py 50
1 1 2 3 5 8 13 21 34
If the module is imported, the code is not run:
>>> import fibo
>>>
This is often used either to provide a convenient user interface to a module, or for
testing purposes (running the module as a script executes a test suite).
6.1.2. The Module Search Path
When a module named spam is imported, the interpreter first searches for a built-in module with that name. If not found, it then searches for a file named spam.py in a list of directories given by the variable sys.path. sys.path is initialized from these locations:
- The directory containing the input script (or the current directory when no file is specified).
- PYTHONPATH (a list of directory names, with the same syntax as the shell variable PATH).
- The installation-dependent default.
Note
On file systems which support symlinks, the directory containing the input script is
calculated after the symlink is followed. In other words the directory containing the
symlink is not added to the module search path.
After initialization, Python programs can modify sys.path. The directory containing the
script being run is placed at the beginning of the search path, ahead of the standard
library path. This means that scripts in that directory will be loaded instead of modules
of the same name in the library directory. This is an error unless the replacement is
intended. See section Standard Modules for more information.
6.1.3. “Compiled” Python files
To speed up loading modules, Python caches the compiled version of each module in
the __pycache__ directory under the name module.version.pyc, where the version encodes
the format of the compiled file; it generally contains the Python version number. For
example, in CPython release 3.3 the compiled version of spam.py would be cached
as __pycache__/spam.cpython-33.pyc. This naming convention allows compiled modules
from different releases and different versions of Python to coexist.
Python checks the modification date of the source against the compiled version to see if
it’s out of date and needs to be recompiled. This is a completely automatic process.
Also, the compiled modules are platform-independent, so the same library can be
shared among systems with different architectures.
Python does not check the cache in two circumstances. First, it always recompiles and
does not store the result for the module that’s loaded directly from the command line.
Second, it does not check the cache if there is no source module. To support a non-
source (compiled only) distribution, the compiled module must be in the source
directory, and there must not be a source module.
You can use the -O or -OO switches on the Python command to reduce the size of a
compiled module. The -O switch removes assert statements, the -OO switch removes
both assert statements and __doc__ strings. Since some programs may rely on having
these available, you should only use this option if you know what you’re doing.
“Optimized” modules have a .pyo rather than a .pyc suffix and are usually smaller.
Future releases may change the effects of optimization.
A program doesn’t run any faster when it is read from a .pyc or .pyo file than when it is
read from a .py file; the only thing that’s faster about .pyc or .pyo files is the speed with
which they are loaded.
The module compileall can create .pyc files (or .pyo files when -O is used) for all
modules in a directory.
There is more detail on this process, including a flow chart of the decisions, in PEP
3147.
6.2. Standard Modules
Python comes with a library of standard modules, described in a separate document, the Python Library Reference. One particular module deserves some attention: sys, which is built into every Python interpreter. The variables sys.ps1 and sys.ps2 define the strings used as primary and secondary prompts; these two variables are only defined if the interpreter is in interactive mode.
The variable sys.path is a list of strings that determines the interpreter’s search path for
modules. It is initialized to a default path taken from the environment
variable PYTHONPATH, or from a built-in default if PYTHONPATH is not set. You can
modify it using standard list operations:
>>> import sys
>>> sys.path.append('/ufs/guido/lib/python')
6.3. The dir() Function
The built-in function dir() is used to find out which names a module defines. It returns a sorted list of strings.
Without arguments, dir() lists the names you have defined currently:
>>>
>>> a = [1, 2, 3, 4, 5]
>>> import fibo
>>> fib = fibo.fib
>>> dir()
['__builtins__', '__name__', 'a', 'fib', 'fibo', 'sys']
Note that it lists all types of names: variables, modules, functions, etc.
dir() does not list the names of built-in functions and variables. If you want a list of those,
they are defined in the standard module builtins:
>>> import builtins
>>> dir(builtins)
['ArithmeticError', 'AssertionError', 'AttributeError', 'BaseException',
 'BlockingIOError', 'BrokenPipeError', 'BufferError', 'BytesWarning',
 'ChildProcessError', 'ConnectionAbortedError', 'ConnectionError',
 'ConnectionRefusedError', 'ConnectionResetError', 'DeprecationWarning',
 'EOFError', 'Ellipsis', 'EnvironmentError', 'Exception', 'False',
 'FileExistsError', 'FileNotFoundError', 'FloatingPointError',
 'FutureWarning', 'GeneratorExit', 'IOError', 'ImportError',
 'ImportWarning', 'IndentationError', 'IndexError', 'InterruptedError',
 'IsADirectoryError', 'KeyError', 'KeyboardInterrupt', 'LookupError',
 'MemoryError', 'NameError', 'None', 'NotADirectoryError',
 'NotImplemented', 'NotImplementedError', 'OSError', 'OverflowError',
 'PendingDeprecationWarning', 'PermissionError', 'ProcessLookupError',
 'ReferenceError', 'ResourceWarning', 'RuntimeError', 'RuntimeWarning',
 'StopIteration', 'SyntaxError', 'SyntaxWarning', 'SystemError',
 'SystemExit', 'TabError', 'TimeoutError', 'True', 'TypeError',
 'UnboundLocalError', 'UnicodeDecodeError', 'UnicodeEncodeError',
 'UnicodeError', 'UnicodeTranslateError', 'UnicodeWarning', 'UserWarning',
 'ValueError', 'Warning', 'ZeroDivisionError', '_', '__build_class__',
 '__debug__', '__doc__', '__import__', '__name__', '__package__', 'abs',
 'all', 'any', 'ascii', 'bin', 'bool', 'bytearray', 'bytes', 'callable',
 'chr', 'classmethod', 'compile', 'complex', 'copyright', 'credits',
 'delattr', 'dict', 'dir', 'divmod', 'enumerate', 'eval', 'exec', 'exit',
 'filter', 'float', 'format', 'frozenset', 'getattr', 'globals',
 'hasattr', 'hash', 'help', 'hex', 'id', 'input', 'int', 'isinstance',
 'issubclass', 'iter', 'len', 'license', 'list', 'locals', 'map', 'max',
 'memoryview', 'min', 'next', 'object', 'oct', 'open', 'ord', 'pow',
 'print', 'property', 'quit', 'range', 'repr', 'reversed', 'round',
 'set', 'setattr', 'slice', 'sorted', 'staticmethod', 'str', 'sum',
 'super', 'tuple', 'type', 'vars', 'zip']
6.4. Packages
Packages are a way of structuring Python’s module namespace by using “dotted
module names”. For example, the module name A.B designates a submodule
named B in a package named A. Just like the use of modules saves the authors of
different modules from having to worry about each other’s global variable names, the
use of dotted module names saves the authors of multi-module packages like NumPy or
the Python Imaging Library from having to worry about each other’s module names.
Suppose you want to design a collection of modules (a “package”) for the uniform
handling of sound files and sound data. There are many different sound file formats
(usually recognized by their extension, for example: .wav, .aiff, .au), so you may need to
create and maintain a growing collection of modules for the conversion between the
various file formats. There are also many different operations you might want to perform
on sound data (such as mixing, adding echo, applying an equalizer function, creating an
artificial stereo effect), so in addition you will be writing a never-ending stream of
modules to perform these operations. Here’s a possible structure for your package
(expressed in terms of a hierarchical filesystem):
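One possible layout (some file names under formats/ and filters/ are illustrative):

```
sound/                          Top-level package
      __init__.py               Initialize the sound package
      formats/                  Subpackage for file format conversions
              __init__.py
              wavread.py
              wavwrite.py
              ...
      effects/                  Subpackage for sound effects
              __init__.py
              echo.py
              surround.py
              reverse.py
              ...
      filters/                  Subpackage for filters
              __init__.py
              equalizer.py
              ...
```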
The __init__.py files are required to make Python treat the directories as containing
packages; this is done to prevent directories with a common name, such as string, from
unintentionally hiding valid modules that occur later on the module search path. In the
simplest case, __init__.py can just be an empty file, but it can also execute initialization
code for the package or set the __all__ variable, described later.
Users of the package can import individual modules from the package, for example:
import sound.effects.echo
This loads the submodule sound.effects.echo. It must be referenced with its full name.
An alternative way of importing the submodule is:
from sound.effects import echo
This also loads the submodule echo, and makes it available without its package prefix,
so it can be used as follows:
echo.echofilter(input, output, delay=0.7, atten=4)
Note that when using from package import item, the item can be either a submodule (or
subpackage) of the package, or some other name defined in the package, like a
function, class or variable. The import statement first tests whether the item is defined in
the package; if not, it assumes it is a module and attempts to load it. If it fails to find it,
an ImportError exception is raised.
Contrarily, when using syntax like import item.subitem.subsubitem, each item except for
the last must be a package; the last item can be a module or a package but can’t be a
class or function or variable defined in the previous item.
6.4.1. Importing * From a Package
What happens when the user writes from sound.effects import *? Ideally, one would hope that this somehow goes out to the filesystem, finds which submodules are present in the package, and imports them all. This could take a long time, and importing sub-modules might have unwanted side-effects that should only happen when the sub-module is explicitly imported. The only solution is for the package author to provide an explicit index of the package.
The import statement uses the following convention: if a package’s __init__.py code
defines a list named __all__, it is taken to be the list of module names that should be
imported when from package import * is encountered. It is up to the package author to
keep this list up-to-date when a new version of the package is released. Package
authors may also decide not to support it, if they don’t see a use for importing * from
their package. For example, the file sound/effects/__init__.py could contain the following
code:
This would mean that from sound.effects import * would import the three named
submodules of the sound package.
If __all__ is not defined, the statement from sound.effects import * does not import all
submodules from the package sound.effects into the current namespace; it only ensures
that the package sound.effects has been imported (possibly running any initialization code
in __init__.py) and then imports whatever names are defined in the package. This
includes any names defined (and submodules explicitly loaded) by __init__.py. It also
includes any submodules of the package that were explicitly loaded by
previous import statements. Consider this code:
import sound.effects.echo
import sound.effects.surround
from sound.effects import *
In this example, the echo and surround modules are imported in the current namespace
because they are defined in the sound.effects package when the from...import statement is
executed. (This also works when __all__ is defined.)
Although certain modules are designed to export only names that follow certain patterns
when you use import *, it is still considered bad practice in production code.
Remember, there is nothing wrong with using from Package import specific_submodule! In
fact, this is the recommended notation unless the importing module needs to use
submodules with the same name from different packages.
6.4.2. Intra-package References
When packages are structured into subpackages (as with the sound package in the example), you can use absolute imports to refer to submodules of sibling packages. You can also write relative imports, with the from module import name form of import
statement. These imports use leading dots to indicate the current and parent packages
involved in the relative import. From the surround module for example, you might use:
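For example, from the surround module you might write (these lines only work when executed inside the package, so they are shown as a fragment):

```python
from . import echo
from .. import formats
from ..filters import equalizer
```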
Note that relative imports are based on the name of the current module. Since the name
of the main module is always "__main__", modules intended for use as the main module
of a Python application must always use absolute imports.
6.4.3. Packages in Multiple Directories
Packages support one more special attribute, __path__. This is initialized to be a list
containing the name of the directory holding the package’s __init__.py before the code in
that file is executed. This variable can be modified; doing so affects future searches for
modules and subpackages contained in the package.
While this feature is not often needed, it can be used to extend the set of modules found
in a package.
Footnotes
[1] In fact function definitions are also ‘statements’ that are ‘executed’; the execution of a module-level function definition enters the function name in the module’s global symbol table.
7. Input and Output
There are several ways to present the output of a program; data can be printed in a human-readable form, or written to a file for future use. This chapter will discuss some of the possibilities.
7.1. Fancier Output Formatting
Often you’ll want more control over the formatting of your output than simply printing
space-separated values. There are two ways to format your output; the first way is to do
all the string handling yourself; using string slicing and concatenation operations you
can create any layout you can imagine. The string type has some methods that perform
useful operations for padding strings to a given column width; these will be discussed
shortly. The second way is to use the str.format() method.
The string module contains a Template class which offers yet another way to
substitute values into strings.
One question remains, of course: how do you convert values to strings? Luckily, Python
has ways to convert any value to a string: pass it to the repr() or str() functions.
The str() function is meant to return representations of values which are fairly human-
readable, while repr() is meant to generate representations which can be read by the
interpreter (or will force a SyntaxError if there is no equivalent syntax). For objects
which don’t have a particular representation for human consumption, str() will return
the same value as repr(). Many values, such as numbers or structures like lists and
dictionaries, have the same representation using either function. Strings, in particular,
have two distinct representations.
Some examples:
>>> s = 'Hello, world.'
>>> str(s)
'Hello, world.'
>>> repr(s)
"'Hello, world.'"
>>> str(1/7)
'0.14285714285714285'
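The following two loops print the same table of squares and cubes, first with str.rjust() and then with str.format():

```python
for x in range(1, 11):
    print(repr(x).rjust(2), repr(x*x).rjust(3), end=' ')
    print(repr(x*x*x).rjust(4))

for x in range(1, 11):
    print('{0:2d} {1:3d} {2:4d}'.format(x, x*x, x*x*x))
```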
(Note that in the first example, one space between each column was added by the
way print() works: it always adds spaces between its arguments.)
This example demonstrates the str.rjust() method of string objects, which right-
justifies a string in a field of a given width by padding it with spaces on the left. There
are similar methods str.ljust() and str.center(). These methods do not write
anything, they just return a new string. If the input string is too long, they don’t truncate
it, but return it unchanged; this will mess up your column lay-out but that’s usually better
than the alternative, which would be lying about a value. (If you really want truncation
you can always add a slice operation, as in x.ljust(n)[:n].)
There is another method, str.zfill(), which pads a numeric string on the left with
zeros. It understands about plus and minus signs:
>>> '12'.zfill(5)
'00012'
>>> '-3.14'.zfill(7)
'-003.14'
>>> '3.14159265359'.zfill(5)
'3.14159265359'
Basic usage of the str.format() method looks like this:
>>> print('We are the {} who say "{}!"'.format('knights', 'Ni'))
We are the knights who say "Ni!"
The brackets and characters within them (called format fields) are replaced with the
objects passed into the str.format() method. A number in the brackets can be used
to refer to the position of the object passed into the str.format() method.
>>> print('{0} and {1}'.format('spam', 'eggs'))
spam and eggs
>>> print('{1} and {0}'.format('spam', 'eggs'))
eggs and spam
If keyword arguments are used in the str.format() method, their values are referred
to by using the name of the argument.
>>> print('This {food} is {adjective}.'.format(
...       food='spam', adjective='absolutely horrible'))
This spam is absolutely horrible.
Positional and keyword arguments can be arbitrarily combined:
>>> print('The story of {0}, {1}, and {other}.'.format('Bill', 'Manfred',
...                                                    other='Georg'))
The story of Bill, Manfred, and Georg.
'!a' (apply ascii()), '!s' (apply str()) and '!r' (apply repr()) can be used to
convert the value before it is formatted:
>>> import math
>>> print('The value of PI is approximately {!r}.'.format(math.pi))
The value of PI is approximately 3.141592653589793.
An optional ':' and format specifier can follow the field name. This allows greater
control over how the value is formatted. The following example rounds Pi to three
places after the decimal.
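For example:

```python
import math
msg = 'The value of PI is approximately {0:.3f}.'.format(math.pi)
print(msg)
```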
Passing an integer after the ':' will cause that field to be a minimum number of
characters wide. This is useful for making tables pretty.
>>> table = {'Sjoerd': 4127, 'Jack': 4098, 'Dcab': 7678}
>>> for name, phone in table.items():
...     print('{0:10} ==> {1:10d}'.format(name, phone))
...
Sjoerd     ==>       4127
Jack       ==>       4098
Dcab       ==>       7678
If you have a really long format string that you don’t want to split up, it would be nice if
you could reference the variables to be formatted by name instead of by position. This
can be done by simply passing the dict and using square brackets '[]' to access the
keys
>>> table = {'Sjoerd': 4127, 'Jack': 4098, 'Dcab': 8637678}
>>> print('Jack: {0[Jack]:d}; Sjoerd: {0[Sjoerd]:d}; '
... 'Dcab: {0[Dcab]:d}'.format(table))
Jack: 4098; Sjoerd: 4127; Dcab: 8637678
This could also be done by passing the table as keyword arguments with the ‘**’
notation.
>>> table = {'Sjoerd': 4127, 'Jack': 4098, 'Dcab': 8637678}
>>> print('Jack: {Jack:d}; Sjoerd: {Sjoerd:d}; Dcab: {Dcab:d}'.format(**table))
Jack: 4098; Sjoerd: 4127; Dcab: 8637678
This is particularly useful in combination with the built-in function vars(), which returns
a dictionary containing all local variables.
For a complete overview of string formatting with str.format(), see Format String
Syntax.
7.2. Reading and Writing Files
open() returns a file object, and is most commonly used with two arguments: open(filename, mode).
>>> f = open('workfile', 'w')
The first argument is a string containing the filename. The second argument is another
string containing a few characters describing the way in which the file will be
used. mode can be 'r' when the file will only be read, 'w' for only writing (an existing
file with the same name will be erased), and 'a' opens the file for appending; any data
written to the file is automatically added to the end. 'r+' opens the file for both reading
and writing. The mode argument is optional; 'r' will be assumed if it’s omitted.
Normally, files are opened in text mode, that means, you read and write strings from
and to the file, which are encoded in a specific encoding. If encoding is not specified,
the default is platform dependent (see open()). 'b' appended to the mode opens the
file in binary mode: now the data is read and written in the form of bytes objects. This
mode should be used for all files that don’t contain text.
In text mode, the default when reading is to convert platform-specific line endings (\n
on Unix, \r\n on Windows) to just \n. When writing in text mode, the default is to
convert occurrences of \n back to platform-specific line endings. This behind-the-
scenes modification to file data is fine for text files, but will corrupt binary data like that
in JPEG or EXE files. Be very careful to use binary mode when reading and writing such
files.
7.2.1. Methods of File Objects
The rest of the examples in this section will assume that a file object called f has already been created.
To read a file’s contents, call f.read(size), which reads some quantity of data and
returns it as a string or bytes object. size is an optional numeric argument. When size is
omitted or negative, the entire contents of the file will be read and returned; it’s your
problem if the file is twice as large as your machine’s memory. Otherwise, at
most size bytes are read and returned. If the end of the file has been
reached, f.read() will return an empty string ('').
>>> f.read()
'This is the entire file.\n'
>>> f.read()
''
f.readline() reads a single line from the file; a newline character (\n) is left at the
end of the string, and is only omitted on the last line of the file if the file doesn’t end in a
newline. This makes the return value unambiguous; if f.readline() returns an empty
string, the end of the file has been reached, while a blank line is represented by '\n', a
string containing only a single newline.
>>> f.readline()
'This is the first line of the file.\n'
>>> f.readline()
'Second line of the file\n'
>>> f.readline()
''
For reading lines from a file, you can loop over the file object. This is memory efficient,
fast, and leads to simple code:
>>> for line in f:
...     print(line, end='')
...
This is the first line of the file.
Second line of the file
If you want to read all the lines of a file in a list you can also
use list(f) or f.readlines().
f.write(string) writes the contents of string to the file, returning the number of
characters written.
>>> f.write('This is a test\n')
15
Other types of objects need to be converted – either to a string (in text mode) or a bytes object (in binary mode) – before writing them:
>>> value = ('the answer', 42)
>>> s = str(value)
>>> f.write(s)
18
f.tell() returns an integer giving the file object’s current position in the file
represented as number of bytes from the beginning of the file when in binary mode and
an opaque number when in text mode.
To change the file object’s position, use f.seek(offset, from_what). The position
is computed from adding offset to a reference point; the reference point is selected by
the from_what argument. A from_what value of 0 measures from the beginning of the
file, 1 uses the current file position, and 2 uses the end of the file as the reference
point. from_what can be omitted and defaults to 0, using the beginning of the file as the
reference point.
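For example (a self-contained sketch; the file name is illustrative, and the file is opened in binary mode so that all seeks are allowed):

```python
import os

f = open('workfile_seek_demo', 'wb+')  # create/truncate, binary read-write
f.write(b'0123456789abcdef')
f.seek(5)            # go to the 6th byte in the file
b1 = f.read(1)       # b'5'
f.seek(-3, 2)        # go to the 3rd byte before the end
b2 = f.read(1)       # b'd'
f.close()
os.remove('workfile_seek_demo')  # clean up the demo file
```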
In text files (those opened without a b in the mode string), only seeks relative to the
beginning of the file are allowed (the exception being seeking to the very file end
with seek(0, 2)) and the only valid offset values are those returned from f.tell(),
or zero. Any other offset value produces undefined behaviour.
When you’re done with a file, call f.close() to close it and free up any system
resources taken up by the open file. After calling f.close(), attempts to use the file
object will automatically fail.
>>> f.close()
>>> f.read()
Traceback (most recent call last):
File "<stdin>", line 1, in ?
ValueError: I/O operation on closed file
It is good practice to use the with keyword when dealing with file objects. This has the
advantage that the file is properly closed after its suite finishes, even if an exception is
raised on the way. It is also much shorter than writing equivalent try-finally blocks:
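For example (the file name is illustrative):

```python
import os

with open('workfile_with_demo', 'w') as f:
    f.write('some text\n')

with open('workfile_with_demo') as f:
    read_data = f.read()

closed_after = f.closed   # True: the file was closed on leaving the with block
os.remove('workfile_with_demo')  # clean up the demo file
```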
File objects have some additional methods, such as isatty() and truncate() which
are less frequently used; consult the Library Reference for a complete guide to file
objects.
7.2.2. Saving structured data with json
Strings can easily be written to and read from a file. Numbers take a bit more effort,
since the read() method only returns strings, which will have to be passed to a
function like int(), which takes a string like '123' and returns its numeric value 123.
When you want to save more complex data types like nested lists and dictionaries,
parsing and serializing by hand becomes complicated.
Rather than having users constantly writing and debugging code to save complicated
data types to files, Python allows you to use the popular data interchange format
called JSON (JavaScript Object Notation). The standard module called json can take
Python data hierarchies, and convert them to string representations; this process is
called serializing. Reconstructing the data from the string representation is
called deserializing. Between serializing and deserializing, the string representing the
object may have been stored in a file or data, or sent over a network connection to
some distant machine.
Note
The JSON format is commonly used by modern applications to allow for data exchange.
Many programmers are already familiar with it, which makes it a good choice for
interoperability.
If you have an object x, you can view its JSON string representation with a simple line
of code:
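For example, serializing a small list:

```python
import json

x = [1, 'simple', 'list']
print(json.dumps(x))   # [1, "simple", "list"]
```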
Another variant of the dumps() function, called dump(), simply serializes the object to
a text file. So if f is a text file object opened for writing, we can do this:
json.dump(x, f)
To decode the object again, if f is a text file object which has been opened for reading:
x = json.load(f)
This simple serialization technique can handle lists and dictionaries, but serializing
arbitrary class instances in JSON requires a bit of extra effort. The reference for
the json module contains an explanation of this.
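The paragraph below describes a syntax error like the one produced here, where a colon is missing before print() (the exact caret position and message text vary across Python versions):

```python
# reproduce the error described below by compiling the offending line
try:
    compile("while True print('Hello world')", "<stdin>", "exec")
except SyntaxError as err:
    print("SyntaxError:", err.msg)
```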
The parser repeats the offending line and displays a little ‘arrow’ pointing at the earliest
point in the line where the error was detected. The error is caused by (or at least
detected at) the token preceding the arrow: in the example, the error is detected at the
function print(), since a colon (':') is missing before it. File name and line number
are printed so you know where to look in case the input came from a script.
8.2. Exceptions
Even if a statement or expression is syntactically correct, it may cause an error when an
attempt is made to execute it. Errors detected during execution are
called exceptions and are not unconditionally fatal: you will soon learn how to handle
them in Python programs. Most exceptions are not handled by programs, however, and
result in error messages as shown here:
>>> 10 * (1/0)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
ZeroDivisionError: division by zero
>>> 4 + spam*3
Traceback (most recent call last):
File "<stdin>", line 1, in ?
NameError: name 'spam' is not defined
>>> '2' + 2
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: Can't convert 'int' object to str implicitly
The last line of the error message indicates what happened. Exceptions come in
different types, and the type is printed as part of the message: the types in the example
are ZeroDivisionError, NameError and TypeError. The string printed as the
exception type is the name of the built-in exception that occurred. This is true for all
built-in exceptions, but need not be true for user-defined exceptions (although it is a
useful convention). Standard exception names are built-in identifiers (not reserved
keywords).
The rest of the line provides detail based on the type of exception and what caused it.
The preceding part of the error message shows the context where the exception
happened, in the form of a stack traceback. In general it contains a stack traceback
listing source lines; however, it will not display lines read from standard input.
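A minimal sketch of handling an exception with a try statement (the helper function is illustrative, not part of the tutorial's own examples):

```python
def to_int(s):
    """Return int(s), or None if s is not a valid number."""
    try:
        return int(s)
    except ValueError:
        print("Oops! That was no valid number.")
        return None

print(to_int("42"))    # 42
print(to_int("spam"))  # prints the message, returns None
```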
A try statement may have more than one except clause, to specify handlers for
different exceptions. At most one handler will be executed. Handlers only handle
exceptions that occur in the corresponding try clause, not in other handlers of the
same try statement. An except clause may name multiple exceptions as a
parenthesized tuple, for example:
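For example, one clause catching any of three exception types:

```python
try:
    raise TypeError('example')
except (RuntimeError, TypeError, NameError) as err:
    print('caught:', err)   # any of the three types would be caught here
```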
The last except clause may omit the exception name(s), to serve as a wildcard. Use this
with extreme caution, since it is easy to mask a real programming error in this way! It
can also be used to print an error message and then re-raise the exception (allowing a
caller to handle the exception as well):
import sys

try:
    f = open('myfile.txt')
    s = f.readline()
    i = int(s.strip())
except OSError as err:
    print("OS error: {0}".format(err))
except ValueError:
    print("Could not convert data to an integer.")
except:
    print("Unexpected error:", sys.exc_info()[0])
    raise
The try ... except statement has an optional else clause, which, when present, must
follow all except clauses. It is useful for code that must be executed if the try clause
does not raise an exception. For example:
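A sketch of such usage, counting lines in each file named on the command line:

```python
import sys

for arg in sys.argv[1:]:
    try:
        f = open(arg, 'r')
    except OSError:
        print('cannot open', arg)
    else:
        # only runs when open() raised no exception
        print(arg, 'has', len(f.readlines()), 'lines')
        f.close()
```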
The use of the else clause is better than adding additional code to the try clause
because it avoids accidentally catching an exception that wasn’t raised by the code
being protected by the try ... except statement.
When an exception occurs, it may have an associated value, also known as the
exception’s argument. The presence and type of the argument depend on the exception
type.
The except clause may specify a variable after the exception name. The variable is
bound to an exception instance with the arguments stored in instance.args. For
convenience, the exception instance defines __str__() so the arguments can be
printed directly without having to reference .args. One may also instantiate an
exception first before raising it and add any attributes to it as desired.
>>> try:
...     raise Exception('spam', 'eggs')
... except Exception as inst:
...     print(type(inst))    # the exception instance
...     print(inst.args)     # arguments stored in .args
...     print(inst)          # __str__ allows args to be printed directly,
...                          # but may be overridden in exception subclasses
...     x, y = inst.args     # unpack args
...     print('x =', x)
...     print('y =', y)
...
<class 'Exception'>
('spam', 'eggs')
('spam', 'eggs')
x = spam
y = eggs
If an exception has arguments, they are printed as the last part (‘detail’) of the message
for unhandled exceptions.
Exception handlers don’t just handle exceptions if they occur immediately in the try
clause, but also if they occur inside functions that are called (even indirectly) in the try
clause. For example:
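A small sketch: the exception is raised inside a called function, yet the handler around the call still catches it:

```python
def this_fails():
    x = 1 / 0

try:
    this_fails()
except ZeroDivisionError as err:
    print('Handling run-time error:', err)   # Handling run-time error: division by zero
```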
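For example, raising an exception directly (caught here so the snippet runs cleanly; uncaught, it would print a traceback ending in NameError: HiThere):

```python
try:
    raise NameError('HiThere')
except NameError as err:
    print(type(err).__name__, err)   # NameError HiThere
```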
The sole argument to raise indicates the exception to be raised. This must be either
an exception instance or an exception class (a class that derives from Exception).
If you need to determine whether an exception was raised but don’t intend to handle it,
a simpler form of the raise statement allows you to re-raise the exception:
>>> try:
...     raise NameError('HiThere')
... except NameError:
...     print('An exception flew by!')
...     raise
...
An exception flew by!
Traceback (most recent call last):
File "<stdin>", line 2, in ?
NameError: HiThere
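The paragraph below discusses a user-defined exception along these lines (a sketch; the class name is illustrative):

```python
class MyError(Exception):
    def __init__(self, value):
        self.value = value      # the overriding __init__ creates .value

    def __str__(self):
        return repr(self.value)

try:
    raise MyError(2 * 2)
except MyError as e:
    print('My exception occurred, value:', e.value)   # 4
```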
In this example, the default __init__() of Exception has been overridden. The new
behavior simply creates the value attribute. This replaces the default behavior of
creating the args attribute.
Exception classes can be defined which do anything any other class can do, but are
usually kept simple, often only offering a number of attributes that allow information
about the error to be extracted by handlers for the exception. When creating a module
that can raise several distinct errors, a common practice is to create a base class for
exceptions defined by that module, and subclass that to create specific exception
classes for different error conditions:
class Error(Exception):
    """Base class for exceptions in this module."""
    pass

class InputError(Error):
    """Exception raised for errors in the input.

    Attributes:
        expression -- input expression in which the error occurred
        message -- explanation of the error
    """

class TransitionError(Error):
    """Raised when an operation attempts a state transition that's
    not allowed.

    Attributes:
        previous -- state at beginning of transition
        next -- attempted new state
        message -- explanation of why the specific transition is not allowed
    """
Most exceptions are defined with names that end in “Error,” similar to the naming of the
standard exceptions.
Many standard modules define their own exceptions to report errors that may occur in
functions they define. More information on classes is presented in chapter Classes.
8.6. Defining Clean-up Actions
The try statement has another optional clause which is intended to define clean-up
actions that must be executed under all circumstances. For example:
>>> try:
...     raise KeyboardInterrupt
... finally:
...     print('Goodbye, world!')
...
Goodbye, world!
Traceback (most recent call last):
  File "<stdin>", line 2, in ?
KeyboardInterrupt
A finally clause is always executed before leaving the try statement, whether an
exception has occurred or not. When an exception has occurred in the try clause and
has not been handled by an except clause (or it has occurred in
an except or else clause), it is re-raised after the finally clause has been executed.
The finally clause is also executed “on the way out” when any other clause of
the try statement is left via a break, continue or return statement. A more
complicated example:
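The discussion below refers to an example along these lines; calling divide("2", "1") would run the finally clause and then propagate the TypeError:

```python
def divide(x, y):
    try:
        result = x / y
    except ZeroDivisionError:
        print("division by zero!")
    else:
        print("result is", result)
    finally:
        print("executing finally clause")

divide(2, 1)   # result is 2.0, then the finally clause
divide(2, 0)   # division by zero!, then the finally clause
```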
As you can see, the finally clause is executed in any event. The TypeError raised
by dividing two strings is not handled by the except clause and therefore re-raised after
the finally clause has been executed.
In real world applications, the finally clause is useful for releasing external resources
(such as files or network connections), regardless of whether the use of the resource
was successful.
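The code being discussed is a pattern along these lines, where the file object is never explicitly closed (a sample file is created first so the sketch is runnable):

```python
# create a sample file so the snippet below is self-contained
with open("myfile.txt", "w") as f:
    f.write("some text\n")

# the pattern under discussion: the file object is never explicitly closed
for line in open("myfile.txt"):
    print(line, end="")
```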
The problem with this code is that it leaves the file open for an indeterminate amount of
time after this part of the code has finished executing. This is not an issue in simple
scripts, but can be a problem for larger applications. The with statement allows objects
like files to be used in a way that ensures they are always cleaned up promptly and
correctly.
with open("myfile.txt") as f:
    for line in f:
        print(line, end="")
(Lacking universally accepted terminology to talk about classes, I will make occasional
use of Smalltalk and C++ terms. I would use Modula-3 terms, since its object-oriented
semantics are closer to those of Python than C++, but I expect that few readers have
heard of it.)
By the way, I use the word attribute for any name following a dot — for example, in the
expression z.real, real is an attribute of the object z. Strictly speaking, references to
names in modules are attribute references: in the
expression modname.funcname, modname is a module object and funcname is an attribute of
it. In this case there happens to be a straightforward mapping between the module’s
attributes and the global names defined in the module: they share the same
namespace! [1]
Namespaces are created at different moments and have different lifetimes. The
namespace containing the built-in names is created when the Python interpreter starts
up, and is never deleted. The global namespace for a module is created when the
module definition is read in; normally, module namespaces also last until the interpreter
quits. The statements executed by the top-level invocation of the interpreter, either read
from a script file or interactively, are considered part of a module called __main__, so
they have their own global namespace. (The built-in names actually also live in a
module; this is called builtins.)
The local namespace for a function is created when the function is called, and deleted
when the function returns or raises an exception that is not handled within the function.
(Actually, forgetting would be a better way to describe what actually happens.) Of
course, recursive invocations each have their own local namespace.
Although scopes are determined statically, they are used dynamically. At any time
during execution, there are at least three nested scopes whose namespaces are directly
accessible:
- the innermost scope, which is searched first, contains the local names
- the scopes of any enclosing functions, which are searched starting with the nearest enclosing scope, contain non-local, but also non-global names
- the next-to-last scope contains the current module’s global names
- the outermost scope (searched last) is the namespace containing built-in names
If a name is declared global, then all references and assignments go directly to the
middle scope containing the module’s global names. To rebind variables found outside
of the innermost scope, the nonlocal statement can be used; if not declared nonlocal,
those variables are read-only (an attempt to write to such a variable will simply create
a new local variable in the innermost scope, leaving the identically named outer variable
unchanged).
Usually, the local scope references the local names of the (textually) current function.
Outside functions, the local scope references the same namespace as the global scope:
the module’s namespace. Class definitions place yet another namespace in the local
scope.
It is important to realize that scopes are determined textually: the global scope of a
function defined in a module is that module’s namespace, no matter from where or by
what alias the function is called. On the other hand, the actual search for names is done
dynamically, at run time — however, the language definition is evolving towards static
name resolution, at “compile” time, so don’t rely on dynamic name resolution! (In fact,
local variables are already determined statically.)
The global statement can be used to indicate that particular variables live in the global
scope and should be rebound there; the nonlocal statement indicates that particular
variables live in an enclosing scope and should be rebound there.
def scope_test():
    def do_local():
        spam = "local spam"

    def do_nonlocal():
        nonlocal spam
        spam = "nonlocal spam"

    def do_global():
        global spam
        spam = "global spam"

    spam = "test spam"
    do_local()
    print("After local assignment:", spam)
    do_nonlocal()
    print("After nonlocal assignment:", spam)
    do_global()
    print("After global assignment:", spam)

scope_test()
print("In global scope:", spam)
Note how the local assignment (which is default) didn’t change scope_test’s binding
of spam. The nonlocal assignment changed scope_test’s binding of spam, and
the global assignment changed the module-level binding.
You can also see that there was no previous binding for spam before
the global assignment.
class ClassName:
    <statement-1>
    .
    .
    .
    <statement-N>
Class definitions, like function definitions (def statements), must be executed before
they have any effect. (You could conceivably place a class definition in a branch of
an if statement, or inside a function.)
In practice, the statements inside a class definition will usually be function definitions,
but other statements are allowed, and sometimes useful — we’ll come back to this later.
The function definitions inside a class normally have a peculiar form of argument list,
dictated by the calling conventions for methods — again, this is explained later.
When a class definition is entered, a new namespace is created, and used as the local
scope — thus, all assignments to local variables go into this new namespace. In
particular, function definitions bind the name of the new function here.
When a class definition is left normally (via the end), a class object is created. This is
basically a wrapper around the contents of the namespace created by the class
definition; we’ll learn more about class objects in the next section. The original local
scope (the one in effect just before the class definition was entered) is reinstated, and
the class object is bound here to the class name given in the class definition header
(ClassName in the example).
Attribute references use the standard syntax used for all attribute references in
Python: obj.name. Valid attribute names are all the names that were in the class’s
namespace when the class object was created. So, if the class definition looked like
this:
class MyClass:
    """A simple example class"""
    i = 12345

    def f(self):
        return 'hello world'
then MyClass.i and MyClass.f are valid attribute references, returning an integer and a
function object, respectively. Class attributes can also be assigned to, so you can
change the value of MyClass.i by assignment. __doc__ is also a valid attribute, returning
the docstring belonging to the class: "A simple example class".
Class instantiation uses function notation. Just pretend that the class object is a
parameterless function that returns a new instance of the class. For example (assuming
the above class):
x = MyClass()
creates a new instance of the class and assigns this object to the local variable x.
The instantiation operation (“calling” a class object) creates an empty object. Many
classes like to create objects with instances customized to a specific initial state.
Therefore a class may define a special method named __init__(), like this:
    def __init__(self):
        self.data = []

x = MyClass()
Of course, the __init__() method may have arguments for greater flexibility. In that case,
arguments given to the class instantiation operator are passed on to __init__(). For
example,
>>> class Complex:
...     def __init__(self, realpart, imagpart):
...         self.r = realpart
...         self.i = imagpart
...
>>> x = Complex(3.0, -4.5)
>>> x.r, x.i
(3.0, -4.5)
x.counter = 1
while x.counter < 10:
    x.counter = x.counter * 2
print(x.counter)
del x.counter
The other kind of instance attribute reference is a method. A method is a function that
“belongs to” an object. (In Python, the term method is not unique to class instances:
other object types can have methods as well. For example, list objects have methods
called append, insert, remove, sort, and so on. However, in the following discussion,
we’ll use the term method exclusively to mean methods of class instance objects,
unless explicitly stated otherwise.)
Valid method names of an instance object depend on its class. By definition, all
attributes of a class that are function objects define corresponding methods of its
instances. So in our example, x.f is a valid method reference, since MyClass.f is a
function, but x.i is not, since MyClass.i is not. But x.f is not the same thing as MyClass.f —
it is a method object, not a function object.
x.f()
In the MyClass example, this will return the string 'hello world'. However, it is not
necessary to call a method right away: x.f is a method object, and can be stored away
and called at a later time. For example:
xf = x.f
while True:
    print(xf())
What exactly happens when a method is called? You may have noticed that x.f() was
called without an argument above, even though the function definition for f() specified
an argument. What happened to the argument? Surely Python raises an exception
when a function that requires an argument is called without any — even if the argument
isn’t actually used...
Actually, you may have guessed the answer: the special thing about methods is that the
object is passed as the first argument of the function. In our example, the call x.f() is
exactly equivalent to MyClass.f(x). In general, calling a method with a list of n arguments
is equivalent to calling the corresponding function with an argument list that is created
by inserting the method’s object before the first argument.
If you still don’t understand how methods work, a look at the implementation can
perhaps clarify matters. When an instance attribute is referenced that isn’t a data
attribute, its class is searched. If the name denotes a valid class attribute that is a
function object, a method object is created by packing (pointers to) the instance object
and the function object just found together in an abstract object: this is the method
object. When the method object is called with an argument list, a new argument list is
constructed from the instance object and the argument list, and the function object is
called with this new argument list.
class Dog:

    kind = 'canine'         # class variable shared by all instances

    def __init__(self, name):
        self.name = name    # instance variable unique to each instance
>>> d = Dog('Fido')
>>> e = Dog('Buddy')
>>> d.kind # shared by all dogs
'canine'
>>> e.kind # shared by all dogs
'canine'
>>> d.name # unique to d
'Fido'
>>> e.name # unique to e
'Buddy'
As discussed in A Word About Names and Objects, shared data can have possibly
surprising effects when it involves mutable objects such as lists and dictionaries. For
example, the tricks list in the following code should not be used as a class variable
because just a single list would be shared by all Dog instances:
class Dog:

    tricks = []             # mistaken use of a class variable

    def __init__(self, name):
        self.name = name

    def add_trick(self, trick):
        self.tricks.append(trick)
>>> d = Dog('Fido')
>>> e = Dog('Buddy')
>>> d.add_trick('roll over')
>>> e.add_trick('play dead')
>>> d.tricks # unexpectedly shared by all dogs
['roll over', 'play dead']
class Dog:

    def __init__(self, name):
        self.name = name
        self.tricks = []    # creates a new empty list for each dog

    def add_trick(self, trick):
        self.tricks.append(trick)
>>> d = Dog('Fido')
>>> e = Dog('Buddy')
>>> d.add_trick('roll over')
>>> e.add_trick('play dead')
>>> d.tricks
['roll over']
>>> e.tricks
['play dead']
Clients should use data attributes with care — clients may mess up invariants
maintained by the methods by stamping on their data attributes. Note that clients may
add data attributes of their own to an instance object without affecting the validity of the
methods, as long as name conflicts are avoided — again, a naming convention can
save a lot of headaches here.
There is no shorthand for referencing data attributes (or other methods!) from within
methods. I find that this actually increases the readability of methods: there is no
chance of confusing local variables and instance variables when glancing through a
method.
Often, the first argument of a method is called self. This is nothing more than a
convention: the name self has absolutely no special meaning to Python. Note, however,
that by not following the convention your code may be less readable to other Python
programmers, and it is also conceivable that a class browser program might be written
that relies upon such a convention.
Any function object that is a class attribute defines a method for instances of that class.
It is not necessary that the function definition is textually enclosed in the class definition:
assigning a function object to a local variable in the class is also ok. For example:
# Function defined outside the class
def f1(self, x, y):
    return min(x, x+y)

class C:
    f = f1

    def g(self):
        return 'hello world'

    h = g
Now f, g and h are all attributes of class C that refer to function objects, and
consequently they are all methods of instances of C — h being exactly equivalent to g.
Note that this practice usually only serves to confuse the reader of a program.
Methods may call other methods by using method attributes of the self argument:
class Bag:
    def __init__(self):
        self.data = []

    def add(self, x):
        self.data.append(x)

    def addtwice(self, x):
        self.add(x)
        self.add(x)
Methods may reference global names in the same way as ordinary functions. The global
scope associated with a method is the module containing its definition. (A class is never
used as a global scope.) While one rarely encounters a good reason for using global
data in a method, there are many legitimate uses of the global scope: for one thing,
functions and modules imported into the global scope can be used by methods, as well
as functions and classes defined in it. Usually, the class containing the method is itself
defined in this global scope, and in the next section we’ll find some good reasons why a
method would want to reference its own class.
Each value is an object, and therefore has a class (also called its type). It is stored
as object.__class__.
9.5. Inheritance
Of course, a language feature would not be worthy of the name “class” without
supporting inheritance. The syntax for a derived class definition looks like this:
class DerivedClassName(BaseClassName):
    <statement-1>
    .
    .
    .
    <statement-N>
The name BaseClassName must be defined in a scope containing the derived class
definition. In place of a base class name, other arbitrary expressions are also allowed.
This can be useful, for example, when the base class is defined in another module:
class DerivedClassName(modname.BaseClassName):
Execution of a derived class definition proceeds the same as for a base class. When the
class object is constructed, the base class is remembered. This is used for resolving
attribute references: if a requested attribute is not found in the class, the search
proceeds to look in the base class. This rule is applied recursively if the base class itself
is derived from some other class.
Derived classes may override methods of their base classes. Because methods have
no special privileges when calling other methods of the same object, a method of a
base class that calls another method defined in the same base class may end up calling
a method of a derived class that overrides it. (For C++ programmers: all methods in
Python are effectively virtual.)
An overriding method in a derived class may in fact want to extend rather than simply
replace the base class method of the same name. There is a simple way to call the
base class method directly: just call BaseClassName.methodname(self, arguments). This is
occasionally useful to clients as well. (Note that this only works if the base class is
accessible as BaseClassName in the global scope.)
Use isinstance() to check an instance’s type: isinstance(obj, int) will be True only
if obj.__class__ is int or some class derived from int.
Use issubclass() to check class inheritance: issubclass(bool, int) is True since bool is
a subclass of int. However, issubclass(float, int) is False since float is not a subclass
of int.
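The discussion of attribute search below assumes a class with multiple base classes, defined with the form class DerivedClassName(Base1, Base2, Base3). A runnable sketch using those placeholder names, printing the method resolution order:

```python
class Base1: pass
class Base2: pass
class Base3: pass

class DerivedClassName(Base1, Base2, Base3):
    pass

# attribute search is roughly depth-first, left-to-right
print([c.__name__ for c in DerivedClassName.__mro__])
# ['DerivedClassName', 'Base1', 'Base2', 'Base3', 'object']
```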
For most purposes, in the simplest cases, you can think of the search for attributes
inherited from a parent class as depth-first, left-to-right, not searching twice in the same
class where there is an overlap in the hierarchy. Thus, if an attribute is not found
in DerivedClassName, it is searched for in Base1, then (recursively) in the base classes
of Base1, and if it is not found there, it is searched for in Base2, and so on.
In fact, it is slightly more complex than that; the method resolution order changes
dynamically to support cooperative calls to super(). This approach is known in some
other multiple-inheritance languages as call-next-method and is more powerful than the
super call found in single-inheritance languages.
Dynamic ordering is necessary because all cases of multiple inheritance exhibit one or
more diamond relationships (where at least one of the parent classes can be accessed
through multiple paths from the bottommost class). For example, all classes inherit
from object, so any case of multiple inheritance provides more than one path to
reach object. To keep the base classes from being accessed more than once, the
dynamic algorithm linearizes the search order in a way that preserves the left-to-right
ordering specified in each class, that calls each parent only once, and that is monotonic
(meaning that a class can be subclassed without affecting the precedence order of its
parents). Taken together, these properties make it possible to design reliable and
extensible classes with multiple inheritance. For more detail,
see https://www.python.org/download/releases/2.3/mro/.
Since there is a valid use-case for class-private members (namely to avoid name
clashes of names with names defined by subclasses), there is limited support for such a
mechanism, called name mangling. Any identifier of the form __spam (at least two
leading underscores, at most one trailing underscore) is textually replaced
with _classname__spam, where classname is the current class name with leading
underscore(s) stripped. This mangling is done without regard to the syntactic position of
the identifier, as long as it occurs within the definition of a class.
Name mangling is helpful for letting subclasses override methods without breaking
intraclass method calls. For example:
class Mapping:
    def __init__(self, iterable):
        self.items_list = []
        self.__update(iterable)

    def update(self, iterable):
        for item in iterable:
            self.items_list.append(item)

    __update = update   # private copy of original update() method

class MappingSubclass(Mapping):

    def update(self, keys, values):
        # provides new signature for update()
        # but does not break __init__()
        for item in zip(keys, values):
            self.items_list.append(item)
Note that the mangling rules are designed mostly to avoid accidents; it still is possible to
access or modify a variable that is considered private. This can even be useful in
special circumstances, such as in the debugger.
Notice that code passed to exec() or eval() does not consider the classname of the
invoking class to be the current class; this is similar to the effect of the global statement,
the effect of which is likewise restricted to code that is byte-compiled together. The
same restriction applies to getattr(), setattr() and delattr(), as well as when
referencing __dict__ directly.
class Employee:
    pass
A piece of Python code that expects a particular abstract data type can often be passed
a class that emulates the methods of that data type instead. For instance, if you have a
function that formats some data from a file object, you can define a class with
methods read() and readline() that get the data from a string buffer instead, and pass it as
an argument.
Instance method objects have attributes, too: m.__self__ is the instance object with the
method m(), and m.__func__ is the function object corresponding to the method.
There are two new valid (semantic) forms for the raise statement:
raise Class
raise Instance
In the first form, Class must be an instance of type or of a class derived from it. The first
form is a shorthand for:
raise Class()
class B(Exception):
    pass

class C(B):
    pass

class D(C):
    pass
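The remark below about reversing the except clauses refers to a driver loop like this (a sketch; the classes are repeated so the block is self-contained):

```python
class B(Exception):
    pass

class C(B):
    pass

class D(C):
    pass

# raises each class in turn; the most derived matching clause wins
for cls in [B, C, D]:
    try:
        raise cls()
    except D:
        print("D")
    except C:
        print("C")
    except B:
        print("B")
```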
Note that if the except clauses were reversed (with except B first), it would have printed
B, B, B — the first matching except clause is triggered.
When an error message is printed for an unhandled exception, the exception’s class
name is printed, then a colon and a space, and finally the instance converted to a string
using the built-in function str().
9.9. Iterators
By now you have probably noticed that most container objects can be looped over using
a for statement:
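For instance, lists, dictionaries, and strings can all be iterated the same way:

```python
for element in [1, 2, 3]:
    print(element)
for key in {'one': 1, 'two': 2}:
    print(key)
for char in "123":
    print(char)
```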
This style of access is clear, concise, and convenient. The use of iterators pervades and
unifies Python. Behind the scenes, the for statement calls iter() on the container object.
The function returns an iterator object that defines the method __next__() which
accesses elements in the container one at a time. When there are no more
elements, __next__() raises a StopIteration exception which tells the for loop to terminate.
You can call the __next__() method using the next() built-in function; this example shows
how it all works:
>>> s = 'abc'
>>> it = iter(s)
>>> it
<iterator object at 0x00A1DB50>
>>> next(it)
'a'
>>> next(it)
'b'
>>> next(it)
'c'
>>> next(it)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
next(it)
StopIteration
Having seen the mechanics behind the iterator protocol, it is easy to add iterator
behavior to your classes. Define an __iter__() method which returns an object with
a __next__() method. If the class defines __next__(), then __iter__() can just return self:
class Reverse:
    """Iterator for looping over a sequence backwards."""
    def __init__(self, data):
        self.data = data
        self.index = len(data)

    def __iter__(self):
        return self

    def __next__(self):
        if self.index == 0:
            raise StopIteration
        self.index = self.index - 1
        return self.data[self.index]
>>> rev = Reverse('spam')
>>> iter(rev)
<__main__.Reverse object at 0x00A1DB50>
>>> for char in rev:
...     print(char)
...
m
a
p
s
9.10. Generators
Generators are a simple and powerful tool for creating iterators. They are written like
regular functions but use the yield statement whenever they want to return data. Each
time next() is called on it, the generator resumes where it left off (it remembers all the
data values and which statement was last executed). An example shows that
generators can be trivially easy to create:
def reverse(data):
    for index in range(len(data)-1, -1, -1):
        yield data[index]
>>> for char in reverse('golf'):
...     print(char)
...
f
l
o
g
Anything that can be done with generators can also be done with class-based iterators
as described in the previous section. What makes generators so compact is that
the __iter__() and __next__() methods are created automatically.
Another key feature is that the local variables and execution state are automatically
saved between calls. This made the function easier to write and much more clear than
an approach using instance variables like self.index and self.data.
In addition to automatic method creation and saving program state, when generators
terminate, they automatically raise StopIteration. In combination, these features make it
easy to create iterators with no more effort than writing a regular function.
Examples:
>>> sum(i*i for i in range(10)) # sum of squares
285
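Generator expressions also combine naturally with other consuming functions; a further sketch in the same spirit (the vector names are illustrative):

```python
# Dot product of two vectors computed with a generator expression.
xvec = [10, 20, 30]
yvec = [7, 5, 3]
dot = sum(x * y for x, y in zip(xvec, yvec))
print(dot)
```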
Footnotes
[1] Except for one thing. Module objects have a secret read-only attribute called __dict__ which returns the dictionary used to implement the module's namespace; the name __dict__ is an attribute but not a global name. Obviously, using this violates the abstraction of namespace implementation, and should be restricted to things like post-mortem debuggers.
10.1. Operating System Interface
The os module provides dozens of functions for interacting with the operating system:
>>> import os
>>> os.getcwd()      # Return the current working directory
'C:\\Python34'
>>> os.chdir('/server/accesslogs')   # Change current working directory
>>> os.system('mkdir today')   # Run the command mkdir in the system shell
0
Be sure to use the import os style instead of from os import *. This will
keep os.open() from shadowing the built-in open() function which operates much
differently.
The built-in dir() and help() functions are useful as interactive aids for working with large
modules like os:
>>> import os
>>> dir(os)
<returns a list of all module functions>
>>> help(os)
<returns an extensive manual page created from the module's docstrings>
For daily file and directory management tasks, the shutil module provides a higher level
interface that is easier to use:
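A sketch of that higher-level interface; the file names are illustrative, and the example works in a temporary directory so it is self-contained:

```python
import os
import shutil
import tempfile

# Work in a scratch directory so no real files are touched.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, 'data.db')
with open(src, 'w') as f:
    f.write('sample')

# copyfile() copies a file's contents and returns the destination path.
copied = shutil.copyfile(src, os.path.join(workdir, 'archive.db'))

# move() relocates a file (or a whole directory tree) and returns the new path.
moved = shutil.move(copied, os.path.join(workdir, 'backup.db'))
print(os.path.basename(moved))
```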
>>> import re
>>> re.findall(r'\bf[a-z]*', 'which foot or hand fell fastest')
['foot', 'fell', 'fastest']
>>> re.sub(r'(\b[a-z]+) \1', r'\1', 'cat in the the hat')
'cat in the hat'
When only simple capabilities are needed, string methods are preferred because they
are easier to read and debug:
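For instance, a fixed-string substitution needs no regular expression at all:

```python
# str.replace handles simple substitution directly.
print('tea for too'.replace('too', 'two'))
```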
10.6. Mathematics
The math module gives access to the underlying C library functions for floating point
math:
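A short sketch of the kind of functions available:

```python
import math

print(math.cos(math.pi / 4))   # cosine of pi/4 radians
print(math.log(1024, 2))       # base-2 logarithm
```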
The statistics module calculates basic statistical properties (the mean, median, variance,
etc.) of numeric data:
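A minimal sketch with illustrative data:

```python
import statistics

data = [2.75, 1.75, 1.25, 0.25, 0.5, 1.25, 3.5]
print(statistics.mean(data))
print(statistics.median(data))
print(statistics.variance(data))   # sample variance
```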
The SciPy project <https://scipy.org> has many other modules for numerical
computations.
For example, it may be tempting to use the tuple packing and unpacking feature instead
of the traditional approach to swapping arguments. The timeit module quickly
demonstrates a modest performance advantage:
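A sketch of such a measurement; absolute timings vary by machine, so only the relative comparison is meaningful:

```python
from timeit import Timer

# Time the traditional three-step swap against tuple packing/unpacking.
traditional = Timer('t = a; a = b; b = t', 'a = 1; b = 2').timeit(number=100000)
packing = Timer('a, b = b, a', 'a = 1; b = 2').timeit(number=100000)
print(traditional > 0 and packing > 0)
```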
In contrast to timeit's fine level of granularity, the profile and pstats modules provide tools
for identifying time critical sections in larger blocks of code.
10.11. Quality Control
One approach for developing high quality software is to write tests for each function as it
is developed and to run those tests frequently during the development process.
The doctest module provides a tool for scanning a module and validating tests
embedded in a program’s docstrings. Test construction is as simple as cutting-and-
pasting a typical call along with its results into the docstring. This improves the
documentation by providing the user with an example and it allows the doctest module
to make sure the code remains true to the documentation:
def average(values):
    """Computes the arithmetic mean of a list of numbers.

    >>> print(average([20, 30, 70]))
    40.0
    """
    return sum(values) / len(values)

import doctest
doctest.testmod()   # automatically validate the embedded tests
The unittest module is not as effortless as the doctest module, but it allows a more
comprehensive set of tests to be maintained in a separate file:
import unittest

class TestStatisticalFunctions(unittest.TestCase):

    def test_average(self):
        self.assertEqual(average([20, 30, 70]), 40.0)
        self.assertEqual(round(average([1, 5, 7]), 1), 4.3)
        with self.assertRaises(ZeroDivisionError):
            average([])
        with self.assertRaises(TypeError):
            average(20, 30, 70)

unittest.main()   # Calling from the command line invokes all tests
10.12. Batteries Included
Python has a "batteries included" philosophy. This is best seen through the
sophisticated and robust capabilities of its larger packages. For example:
The xmlrpc.client and xmlrpc.server modules make implementing remote procedure calls
into an almost trivial task. Despite the modules' names, no direct knowledge or handling
of XML is needed.
The email package is a library for managing email messages, including MIME and other
RFC 2822-based message documents. Unlike smtplib and poplib which actually send
and receive messages, the email package has a complete toolset for building or
decoding complex message structures (including attachments) and for implementing
internet encoding and header protocols.
The json package provides robust support for parsing this popular data interchange
format. The csv module supports direct reading and writing of files in Comma-Separated
Value format, commonly supported by databases and spreadsheets. XML processing is
supported by the xml.etree.ElementTree, xml.dom and xml.sax packages. Together,
these modules and packages greatly simplify data interchange between Python
applications and other tools.
The sqlite3 module is a wrapper for the SQLite database library, providing a persistent
database that can be updated and accessed using slightly nonstandard SQL syntax.
Internationalization is supported by a number of modules including gettext, locale, and
the codecs package.
11. Brief Tour of the Standard Library - Part II
11.1. Output Formatting
The pprint module offers more sophisticated control over printing both built-in and user
defined objects in a way that is readable by the interpreter. When the result is longer
than one line, the “pretty printer” adds line breaks and indentation to more clearly reveal
data structure:
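A sketch of that behavior with an illustrative nested structure:

```python
import pprint

# At width=30 the pretty printer must break the structure across lines.
t = [[[['black', 'cyan'], 'white', ['green', 'red']],
      [['magenta', 'yellow'], 'blue']]]
text = pprint.pformat(t, width=30)
print(text)
```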
The textwrap module formats paragraphs of text to fit a given screen width:
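A minimal sketch; the sample text is illustrative:

```python
import textwrap

doc = """The wrap() method is just like fill() except that it returns
a list of strings instead of one big string with newlines to separate
the wrapped lines."""

# fill() reflows the paragraph so no line exceeds the given width.
wrapped = textwrap.fill(doc, width=40)
print(wrapped)
```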
The locale module accesses a database of culture specific data formats. The grouping
attribute of locale’s format function provides a direct way of formatting numbers with
group separators:
>>> import locale
>>> locale.setlocale(locale.LC_ALL, 'English_United States.1252')
'English_United States.1252'
>>> conv = locale.localeconv()          # get a mapping of conventions
>>> x = 1234567.8
>>> locale.format("%d", x, grouping=True)
'1,234,567'
>>> locale.format_string("%s%.*f", (conv['currency_symbol'],
... conv['frac_digits'], x), grouping=True)
'$1,234,567.80'
11.2. Templating
The string module includes a versatile Template class with a simplified syntax suitable for
editing by end-users. This allows users to customize their applications without having to
alter the application.
The format uses placeholder names formed by $ with valid Python identifiers
(alphanumeric characters and underscores). Surrounding the placeholder with braces
allows it to be followed by more alphanumeric letters with no intervening spaces.
Writing $$ creates a single escaped $:
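A sketch of those substitution rules:

```python
from string import Template

# $$ is an escaped dollar sign; ${village} shows the braced form.
t = Template('${village}folk send $$10 to $cause.')
print(t.substitute(village='Nottingham', cause='the ditch fund'))
```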
Template subclasses can specify a custom delimiter. For example, a batch renaming
utility for a photo browser may elect to use percent signs for placeholders such as the
current date, image sequence number, or file format:
>>> import time, os.path
>>> photofiles = ['img_1074.jpg', 'img_1076.jpg', 'img_1077.jpg']
>>> class BatchRename(Template):
...     delimiter = '%'
...
>>> fmt = input('Enter rename style (%d-date %n-seqnum %f-format):  ')
Enter rename style (%d-date %n-seqnum %f-format):  Ashley_%n%f
>>> t = BatchRename(fmt)
>>> date = time.strftime('%d%b%y')
>>> for i, filename in enumerate(photofiles):
... base, ext = os.path.splitext(filename)
... newname = t.substitute(d=date, n=i, f=ext)
... print('{0} --> {1}'.format(filename, newname))
Another application for templating is separating program logic from the details of
multiple output formats. This makes it possible to substitute custom templates for XML
files, plain text reports, and HTML web reports.
11.3. Working with Binary Data Record Layouts
The struct module provides pack() and unpack() functions for working with variable
length binary record formats. The following example shows how to loop through header
information in a ZIP file without using the zipfile module. Pack codes "H" and "I"
represent two and four byte unsigned numbers respectively. The "<" indicates that they
are standard size and in little-endian byte order:
import struct

with open('myfile.zip', 'rb') as f:
    data = f.read()

start = 0
for i in range(3):                      # show the first 3 file headers
    start += 14
    fields = struct.unpack('<IIIHH', data[start:start+16])
    crc32, comp_size, uncomp_size, filenamesize, extra_size = fields

    start += 16
    filename = data[start:start+filenamesize]
    start += filenamesize
    extra = data[start:start+extra_size]
    print(filename, hex(crc32), comp_size, uncomp_size)

    start += extra_size + comp_size     # skip to the next header
11.4. Multi-threading
Threading is a technique for decoupling tasks which are not sequentially dependent.
Threads can be used to improve the responsiveness of applications that accept user
input while other tasks run in the background. A related use case is running I/O in
parallel with computations in another thread.
The following code shows how the high level threading module can run tasks in
background while the main program continues to run:
import threading, zipfile

class AsyncZip(threading.Thread):
    def __init__(self, infile, outfile):
        threading.Thread.__init__(self)
        self.infile = infile
        self.outfile = outfile

    def run(self):
        f = zipfile.ZipFile(self.outfile, 'w', zipfile.ZIP_DEFLATED)
        f.write(self.infile)
        f.close()
        print('Finished background zip of:', self.infile)

background = AsyncZip('mydata.txt', 'myarchive.zip')
background.start()
print('The main program continues to run in foreground.')

background.join()    # Wait for the background task to finish
print('Main program waited until background was done.')
While those tools are powerful, minor design errors can result in problems that are
difficult to reproduce. So, the preferred approach to task coordination is to concentrate
all access to a resource in a single thread and then use the queue module to feed that
thread with requests from other threads. Applications using Queue objects for inter-
thread communication and coordination are easier to design, more readable, and more
reliable.
11.5. Logging
The logging module offers a full featured and flexible logging system. At its simplest, log
messages are sent to a file or to sys.stderr:
import logging
logging.debug('Debugging information')
logging.info('Informational message')
logging.warning('Warning:config file %s not found', 'server.conf')
logging.error('Error occurred')
logging.critical('Critical error -- shutting down')
By default, informational and debugging messages are suppressed and the output is
sent to standard error. Other output options include routing messages through email,
datagrams, sockets, or to an HTTP Server. New filters can select different routing based
on message priority: DEBUG, INFO, WARNING, ERROR, and CRITICAL.
The logging system can be configured directly from Python or can be loaded from a
user editable configuration file for customized logging without altering the application.
11.6. Weak References
Python does automatic memory management (reference counting for most objects and
garbage collection to eliminate cycles). The memory is freed shortly after the last
reference to it has been eliminated.
This approach works fine for most applications but occasionally there is a need to track
objects only as long as they are being used by something else. Unfortunately, just
tracking them creates a reference that makes them permanent. The weakref module
provides tools for tracking objects without creating a reference. When the object is no
longer needed, it is automatically removed from a weakref table and a callback is
triggered for weakref objects. Typical applications include caching objects that are
expensive to create:
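A minimal sketch of such a cache; the class and key names are illustrative, and the immediate reclamation shown relies on CPython's reference counting:

```python
import gc
import weakref

class ExpensiveObject:
    def __init__(self, value):
        self.value = value

obj = ExpensiveObject(10)
cache = weakref.WeakValueDictionary()
cache['primary'] = obj          # does not create a strong reference
print(cache['primary'].value)   # the object is still alive

del obj                         # drop the only strong reference
gc.collect()                    # make collection deterministic here
print('primary' in cache)       # the entry has been removed automatically
```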
11.7. Tools for Working with Lists
Many data structure needs can be met with the built-in list type. However, sometimes
there is a need for alternative implementations with different performance trade-offs.
The array module provides an array() object that is like a list that stores only
homogeneous data and stores it more compactly. The following example shows an
array of numbers stored as two byte unsigned binary numbers (typecode "H") rather
than the usual 16 bytes per entry for regular lists of Python int objects:
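A sketch with illustrative values:

```python
from array import array

# Typecode "H": unsigned 16-bit integers, stored compactly.
a = array('H', [4000, 10, 700, 22222])
print(sum(a))
print(a[1:3])   # slicing returns another array
```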
The collections module provides a deque() object that is like a list with faster appends and
pops from the left side but slower lookups in the middle. These objects are well suited
for implementing queues and breadth first tree searches:
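A minimal queue sketch with illustrative task names:

```python
from collections import deque

d = deque(["task1", "task2", "task3"])
d.append("task4")               # enqueue on the right
print("Handling", d.popleft())  # dequeue from the left
```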
In addition to alternative list implementations, the library also offers other tools such as
the bisect module with functions for manipulating sorted lists:
>>> import bisect
>>> scores = [(100, 'perl'), (200, 'tcl'), (400, 'lua'), (500, 'python')]
>>> bisect.insort(scores, (300, 'ruby'))
>>> scores
[(100, 'perl'), (200, 'tcl'), (300, 'ruby'), (400, 'lua'), (500, 'python')]
The heapq module provides functions for implementing heaps based on regular lists. The
lowest valued entry is always kept at position zero. This is useful for applications which
repeatedly access the smallest element but do not want to run a full list sort:
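A sketch with illustrative data:

```python
from heapq import heapify, heappop, heappush

data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0]
heapify(data)                 # rearrange the list into heap order
heappush(data, -5)            # add a new entry
smallest = [heappop(data) for i in range(3)]   # fetch the three smallest entries
print(smallest)
```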
11.8. Decimal Floating Point Arithmetic
The decimal module offers a Decimal datatype for decimal floating point arithmetic.
Compared to the built-in float implementation of binary floating point, the class is
especially helpful for
financial applications and other uses which require exact decimal representation,
control over precision,
control over rounding to meet legal or regulatory requirements,
tracking of significant decimal places, or
applications where the user expects the results to match calculations done by hand.
For example, calculating a 5% tax on a 70 cent phone charge gives different results in
decimal floating point and binary floating point. The difference becomes significant if the
results are rounded to the nearest cent:
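The comparison can be sketched as:

```python
from decimal import Decimal

# 5% tax on a 70 cent charge, rounded to the nearest cent.
print(round(Decimal('0.70') * Decimal('1.05'), 2))   # decimal result
print(round(.70 * 1.05, 2))                          # binary float result
```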
The Decimal result keeps a trailing zero, automatically inferring four place significance
from multiplicands with two place significance. Decimal reproduces mathematics as
done by hand and avoids issues that can arise when binary floating point cannot exactly
represent decimal quantities.
Exact representation enables the Decimal class to perform modulo calculations and
equality tests that are unsuitable for binary floating point:
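A sketch of both checks:

```python
from decimal import Decimal

print(Decimal('1.00') % Decimal('.10'))              # exactly zero
print(1.00 % 0.10)                                   # a tiny float residue
print(sum([Decimal('0.1')] * 10) == Decimal('1.0'))  # exact equality holds
print(sum([0.1] * 10) == 1.0)                        # but not for floats
```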
The decimal module also implements arithmetic with as much precision as needed:
>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 36
>>> Decimal(1) / Decimal(7)
Decimal('0.142857142857142857142857142857142857')
12. Virtual Environments and Packages
The solution for this problem is to create a virtual environment (often shortened to
“virtualenv”), a self-contained directory tree that contains a Python installation for a
particular version of Python, plus a number of additional packages.
Different applications can then use different virtual environments. To resolve the earlier
example of conflicting requirements, application A can have its own virtual environment
with version 1.0 installed while application B has another virtualenv with version 2.0. If
application B requires a library be upgraded to version 3.0, this will not affect application
A’s environment.
To create a virtualenv, decide upon a directory where you want to place it and
run pyvenv with the directory path:
pyvenv tutorial-env
This will create the tutorial-env directory if it doesn’t exist, and also create
directories inside it containing a copy of the Python interpreter, the standard library, and
various supporting files.
Once you've created a virtual environment, you need to activate it. On Windows, run:
tutorial-env/Scripts/activate
On Unix or MacOS, run:
source tutorial-env/bin/activate
(This script is written for the bash shell. If you use the csh or fish shells, there are
alternate activate.csh and activate.fish scripts you should use instead.)
Activating the virtualenv will change your shell’s prompt to show what virtualenv you’re
using, and modify the environment so that running python will get you that particular
version and installation of Python. For example:
You can also install a specific version of a package by giving the package name
followed by == and the version number, for example pip install requests==2.6.0.
If you re-run this command, pip will notice that the requested version is already
installed and do nothing. You can supply a different version number to get that version,
or you can run pip install --upgrade to upgrade the package to the latest version:
pip uninstall followed by one or more package names will remove the packages
from the virtual environment.
pip list will display all of the packages installed in the virtual environment:
pip freeze will produce a similar list of the installed packages, but the output uses the
format that pip install expects. A common convention is to put this list in
a requirements.txt file:
The requirements.txt can then be committed to version control and shipped as part
of an application. Users can then install all the necessary packages
with pip install -r requirements.txt.
13. What Now?
This tutorial is part of Python's documentation set. Some other documents in the set
are:
You should browse through this manual, which gives complete (though terse)
reference material about types, functions, and the modules in the standard
library. The standard Python distribution includes a lot of additional code. There
are modules to read Unix mailboxes, retrieve documents via HTTP, generate
random numbers, parse command-line options, write CGI programs, compress
data, and many other tasks. Skimming through the Library Reference will give
you an idea of what’s available.
For Python-related questions and problem reports, you can post to the
newsgroup comp.lang.python, or send them to the mailing list at
python-list@python.org. The newsgroup and mailing list are gatewayed, so messages posted to
one will automatically be forwarded to the other. There are hundreds of postings a day,
asking (and answering) questions, suggesting new features, and announcing new
modules. Mailing list archives are available at https://mail.python.org/pipermail/.
Before posting, be sure to check the list of Frequently Asked Questions (also called the
FAQ). The FAQ answers many of the questions that come up again and again, and may
already contain the solution for your problem.
One alternative enhanced interactive interpreter that has been around for quite some
time is IPython, which features tab completion, object exploration and advanced history
management. It can also be thoroughly customized and embedded into other
applications. Another similar enhanced interactive environment is bpython.
15. Floating Point Arithmetic: Issues and Limitations
Floating-point numbers are represented in computer hardware as base 2 (binary)
fractions. For example, the decimal fraction
0.125
has value 1/10 + 2/100 + 5/1000, and in the same way the binary fraction
0.001
has value 0/2 + 0/4 + 1/8. These two fractions have identical values, the only real
difference being that the first is written in base 10 fractional notation, and the second in
base 2.
Unfortunately, most decimal fractions cannot be represented exactly as binary fractions.
A consequence is that, in general, the decimal floating-point numbers you enter are only
approximated by the binary floating-point numbers actually stored in the machine.
The problem is easier to understand at first in base 10. Consider the fraction 1/3. You
can approximate that as a base 10 fraction:
0.3
or, better,
0.33
or, better,
0.333
and so on. No matter how many digits you’re willing to write down, the result will never
be exactly 1/3, but will be an increasingly better approximation of 1/3.
In the same way, no matter how many base 2 digits you’re willing to use, the decimal
value 0.1 cannot be represented exactly as a base 2 fraction. In base 2, 1/10 is the
infinitely repeating fraction
0.0001100110011001100110011001100110011001100110011...
Stop at any finite number of bits, and you get an approximation. On most machines
today, floats are approximated using a binary fraction with the numerator using the first
53 bits starting with the most significant bit and with the denominator as a power of two.
In the case of 1/10, the binary fraction is 3602879701896397 / 2 ** 55 which is close to
but not exactly equal to the true value of 1/10.
Many users are not aware of the approximation because of the way values are
displayed. Python only prints a decimal approximation to the true decimal value of the
binary approximation stored by the machine. On most machines, if Python were to print
the true decimal value of the binary approximation stored for 0.1, it would have to
display
>>> 0.1
0.1000000000000000055511151231257827021181583404541015625
That is more digits than most people find useful, so Python keeps the number of digits
manageable by displaying a rounded value instead
>>> 1 / 10
0.1
Just remember, even though the printed result looks like the exact value of 1/10, the
actual stored value is the nearest representable binary fraction.
Interestingly, there are many different decimal numbers that share the same nearest
approximate binary fraction. For example, the
numbers 0.1 and 0.10000000000000001 and 0.1000000000000000055511151231257827021181
583404541015625 are all approximated by 3602879701896397 / 2 ** 55. Since all of these
decimal values share the same approximation, any one of them could be displayed
while still preserving the invariant eval(repr(x)) == x.
Historically, the Python prompt and built-in repr() function would choose the one with 17
significant digits, 0.10000000000000001. Starting with Python 3.1, Python (on most
systems) is now able to choose the shortest of these and simply display 0.1.
Note that this is in the very nature of binary floating-point: this is not a bug in Python,
and it is not a bug in your code either. You’ll see the same kind of thing in all languages
that support your hardware’s floating-point arithmetic (although some languages may
not display the difference by default, or in all output modes).
For more pleasant output, you may wish to use string formatting to produce a limited
number of significant digits:
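For instance, with the format specifiers described above:

```python
import math

print(format(math.pi, '.12g'))  # give 12 significant digits
print(format(math.pi, '.2f'))   # give 2 digits after the point
print(repr(math.pi))            # the full repr for comparison
```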
It’s important to realize that this is, in a real sense, an illusion: you’re simply rounding
the display of the true machine value.
One illusion may beget another. For example, since 0.1 is not exactly 1/10, summing
three values of 0.1 may not yield exactly 0.3, either:
>>> .1 + .1 + .1 == .3
False
Also, since 0.1 cannot get any closer to the exact value of 1/10 and 0.3 cannot get
any closer to the exact value of 3/10, pre-rounding with the round() function cannot
help:
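The failed pre-rounding can be sketched as:

```python
# Rounding the addends first does not repair the mismatch.
print(round(0.1, 1) + round(0.1, 1) + round(0.1, 1) == round(0.3, 1))
```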
Though the numbers cannot be made closer to their intended exact values,
the round() function can be useful for post-rounding so that results with inexact values
become comparable to one another:
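Post-rounding, by contrast, does make the results comparable:

```python
# Rounding the final, inexact results brings them into agreement.
print(round(0.1 + 0.1 + 0.1, 10) == round(0.3, 10))
```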
Binary floating-point arithmetic holds many surprises like this. The problem with “0.1” is
explained in precise detail below, in the “Representation Error” section. See The Perils
of Floating Point for a more complete account of other common surprises.
As that says near the end, “there are no easy answers.” Still, don’t be unduly wary of
floating-point! The errors in Python float operations are inherited from the floating-point
hardware, and on most machines are on the order of no more than 1 part in 2**53 per
operation. That’s more than adequate for most tasks, but you do need to keep in mind
that it’s not decimal arithmetic and that every float operation can suffer a new rounding
error.
While pathological cases do exist, for most casual use of floating-point arithmetic you’ll
see the result you expect in the end if you simply round the display of your final results
to the number of decimal digits you expect. str() usually suffices, and for finer control
see the str.format() method’s format specifiers in Format String Syntax.
For use cases which require exact decimal representation, try using the decimal module
which implements decimal arithmetic suitable for accounting applications and high-
precision applications.
Another form of exact arithmetic is supported by the fractions module which implements
arithmetic based on rational numbers (so the numbers like 1/3 can be represented
exactly).
If you are a heavy user of floating point operations you should take a look at the
Numerical Python package and many other packages for mathematical and statistical
operations supplied by the SciPy project. See <https://scipy.org>.
Python provides tools that may help on those rare occasions when you really do want to
know the exact value of a float. The float.as_integer_ratio() method expresses the value of
a float as a fraction:
>>> x = 3.14159
>>> x.as_integer_ratio()
(3537115888337719, 1125899906842624)
Since the ratio is exact, it can be used to losslessly recreate the original value:
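The round trip can be sketched as:

```python
x = 3.14159
num, den = x.as_integer_ratio()   # the exact ratio stored for x
print(x == num / den)             # dividing it back reproduces x exactly
```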
The float.hex() method expresses a float in hexadecimal (base 16), again giving the
exact value stored by your computer:
>>> x.hex()
'0x1.921f9f01b866ep+1'
This precise hexadecimal representation can be used to reconstruct the float value
exactly:
>>> x == float.fromhex('0x1.921f9f01b866ep+1')
True
Since the representation is exact, it is useful for reliably porting values across different
versions of Python (platform independence) and exchanging data with other languages
that support the same format (such as Java and C99).
Another helpful tool is the math.fsum() function which helps mitigate loss-of-precision
during summation. It tracks “lost digits” as values are added onto a running total. That
can make a difference in overall accuracy so that the errors do not accumulate to the
point where they affect the final total:
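The difference can be sketched as:

```python
import math

print(sum([0.1] * 10) == 1.0)        # plain sum accumulates rounding error
print(math.fsum([0.1] * 10) == 1.0)  # fsum tracks the lost digits
```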
15.1. Representation Error
Representation error refers to the fact that some (most, actually) decimal fractions
cannot be represented exactly as binary (base 2) fractions. This is the chief reason why
Python (or Perl, C, C++, Java, Fortran, and many others) often won’t display the exact
decimal number you expect.
Why is that? 1/10 is not exactly representable as a binary fraction. Almost all machines
today (November 2000) use IEEE-754 floating point arithmetic, and almost all platforms
map Python floats to IEEE-754 “double precision”. 754 doubles contain 53 bits of
precision, so on input the computer strives to convert 0.1 to the closest fraction it can of
the form J/2**N where J is an integer containing exactly 53 bits. Rewriting
1 / 10 ~= J / (2**N)
as
J ~= 2**N / 10
and recalling that J has exactly 53 bits (is >= 2**52 but < 2**53), the best value for N is
56:
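The check for N = 56 can be sketched as:

```python
# J = 2**N // 10 has exactly 53 bits only when N is 56.
print(2**52 <= 2**56 // 10 < 2**53)
```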
That is, 56 is the only value for N that leaves J with exactly 53 bits. The best possible
value for J is then that quotient rounded:
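The quotient and remainder can be sketched as:

```python
# divmod yields the candidate J (the quotient) and the remainder.
q, r = divmod(2**56, 10)
print(r)
```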
Since the remainder is more than half of 10, the best approximation is obtained by
rounding up:
>>> q+1
7205759403792794
Therefore the best possible approximation to 1/10 in 754 double precision is:
7205759403792794 / 2 ** 56
Dividing both the numerator and denominator by two reduces the fraction to:
3602879701896397 / 2 ** 55
Note that since we rounded up, this is actually a little bit larger than 1/10; if we had not
rounded up, the quotient would have been a little bit smaller than 1/10. But in no case
can it be exactly 1/10!
So the computer never “sees” 1/10: what it sees is the exact fraction given above, the
best 754 double approximation it can get:
>>> 0.1 * 2 ** 55
3602879701896397.0
If we multiply that fraction by 10**55, we can see the value out to 55 decimal digits:
>>> 3602879701896397 * 10 ** 55 // 2 ** 55
1000000000000000055511151231257827021181583404541015625
meaning that the exact number stored in the computer is equal to the decimal value
0.1000000000000000055511151231257827021181583404541015625. Instead of
displaying the full decimal value, many languages (including older versions of Python),
round the result to 17 significant digits:
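That 17-digit rounding can be sketched with string formatting:

```python
# Format 0.1 with 17 digits after the decimal point.
print(format(0.1, '.17f'))
```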
The fractions and decimal modules make these calculations easy:
>>> from decimal import Decimal
>>> from fractions import Fraction
>>> Fraction.from_float(0.1)
Fraction(3602879701896397, 36028797018963968)
>>> (0.1).as_integer_ratio()
(3602879701896397, 36028797018963968)
>>> Decimal.from_float(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
16. Appendix
16.1. Interactive Mode
16.1.1. Error Handling
When an error occurs, the interpreter prints an error message and a stack trace. In
interactive mode, it then returns to the primary prompt; when input came from a file, it
exits with a nonzero exit status after printing the stack trace. (Exceptions handled by
an except clause in a try statement are not errors in this context.) Some errors are
unconditionally fatal and cause an exit with a nonzero exit; this applies to internal
inconsistencies and some cases of running out of memory. All error messages are
written to the standard error stream; normal output from executed commands is written
to standard output.
16.1.2. Executable Python Scripts
On BSD'ish Unix systems, Python scripts can be made directly executable, like shell
scripts, by putting the line
#!/usr/bin/env python3.4
(assuming that the interpreter is on the user’s PATH) at the beginning of the script and
giving the file an executable mode. The #! must be the first two characters of the file.
On some platforms, this first line must end with a Unix-style line ending ( '\n'), not a
Windows ('\r\n') line ending. Note that the hash, or pound, character, '#', is used to
start a comment in Python.
The script can be given an executable mode, or permission, using
the chmod command.
$ chmod +x myscript.py
16.1.3. The Interactive Startup File
When you use Python interactively, it is frequently handy to have some standard
commands executed every time the interpreter is started. You can do this by setting an
environment variable named PYTHONSTARTUP to the name of a file containing your
start-up commands.
This file is only read in interactive sessions, not when Python reads commands from a
script, and not when /dev/tty is given as the explicit source of commands (which
otherwise behaves like an interactive session). It is executed in the same namespace
where interactive commands are executed, so that objects that it defines or imports can
be used without qualification in the interactive session. You can also change the
prompts sys.ps1 and sys.ps2 in this file.
If you want to read an additional start-up file from the current directory, you can program
this in the global start-up file using code
like if os.path.isfile('.pythonrc.py'): exec(open('.pythonrc.py').read()).
If you want to use the startup file in a script, you must do this explicitly in the script:
import os
filename = os.environ.get('PYTHONSTARTUP')
if filename and os.path.isfile(filename):
    with open(filename) as fobj:
        startup_file = fobj.read()
    exec(startup_file)
16.1.4. The Customization Modules
Python provides two hooks to let you customize it: sitecustomize and usercustomize. To
see how it works, you need first to find the location of your user site-packages directory.
Start Python and run this code:
>>> import site
>>> site.getusersitepackages()
'/home/user/.local/lib/python3.4/site-packages'
Now you can create a file named usercustomize.py in that directory and put anything
you want in it. It will affect every invocation of Python, unless it is started with the -
s option to disable the automatic import.
Footnotes
[1] A problem with the GNU Readline package may prevent this.
The most common use case is, of course, a simple invocation of a script:
python myscript.py
When called with standard input connected to a tty device, it prompts for commands and
executes them until an EOF (an end-of-file character, you can produce that
with Ctrl-D on UNIX or Ctrl-Z, Enter on Windows) is read.
When called with a file name argument or with a file as standard input, it reads and
executes a script from that file.
When called with a directory name argument, it reads and executes an appropriately
named script from that directory.
When called with -c command, it executes the Python statement(s) given
as command. Here command may contain multiple statements separated by newlines.
Leading whitespace is significant in Python statements!
When called with -m module-name, the given module is located on the Python
module path and executed as a script.
In non-interactive mode, the entire input is parsed before it is executed.
An interface option terminates the list of options consumed by the interpreter, all
consecutive arguments will end up in sys.argv – note that the first element, subscript
zero (sys.argv[0]), is a string reflecting the program’s source.
-c <command>
Execute the Python code in command. command can be one or more statements
separated by newlines, with significant leading whitespace as in normal module
code.
If this option is given, the first element of sys.argv will be "-c" and the current
directory will be added to the start of sys.path (allowing modules in that
directory to be imported as top level modules).
-m <module-name>
Search sys.path for the named module and execute its contents as
the __main__ module.
Since the argument is a module name, you must not give a file extension (.py).
The module-name should be a valid Python module name, but the
implementation may not always enforce this (e.g. it may allow you to use a name
that includes a hyphen).
Note
This option cannot be used with built-in modules and extension modules written
in C, since they do not have Python module files. However, it can still be used for
precompiled modules, even if the original source file is not available.
If this option is given, the first element of sys.argv will be the full path to the
module file (while the module file is being located, the first element will be set
to "-m"). As with the -c option, the current directory will be added to the start
of sys.path.
Many standard library modules contain code that is invoked on their execution as
a script. An example is the timeit module:
-
Read commands from standard input (sys.stdin). If standard input is a
terminal, -i is implied.
If this option is given, the first element of sys.argv will be "-" and the current
directory will be added to the start of sys.path.
<script>
Execute the Python code contained in script, which must be a filesystem path
(absolute or relative) referring to either a Python file, a directory containing
a __main__.py file, or a zipfile containing a __main__.py file.
If this option is given, the first element of sys.argv will be the script name as
given on the command line.
If the script name refers directly to a Python file, the directory containing that file
is added to the start of sys.path, and the file is executed as
the __main__ module.
If the script name refers to a directory or zipfile, the script name is added to the
start of sys.path and the __main__.py file in that location is executed as
the __main__ module.
See also
runpy.run_path()
-V
--version
Print the Python version number and exit. Example output could be:
Python 3.0
-B
If given, Python won’t try to write .pyc or .pyo files on the import of source
modules. See also PYTHONDONTWRITEBYTECODE.
-d
Turn on parser debugging output (for wizards only, depending on compilation
options). See also PYTHONDEBUG.
-E
Ignore all PYTHON* environment variables, e.g. PYTHONPATH and PYTHONHOME,
that might be set.
-i
When a script is passed as first argument or the -c option is used, enter
interactive mode after executing the script or the command, even
when sys.stdin does not appear to be a terminal. The PYTHONSTARTUP file is
not read.
This can be useful to inspect global variables or a stack trace when a script
raises an exception. See also PYTHONINSPECT.
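A small sketch of the inspection workflow, assuming a Unix shell with python3 on PATH; here stdin is piped, so the "interactive session" is a single print statement:

```shell
# -c runs the command, then -i keeps the interpreter in interactive mode,
# with the command's globals still available for inspection.
# Prompts go to stderr, which is suppressed here.
printf 'print(x)\n' | python3 -i -c 'x = 41 + 1' 2>/dev/null   # prints 42
```

In real use you would run a script with -i and type at the >>> prompt.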
-I
Run Python in isolated mode. This also implies -E and -s. In isolated
mode sys.path contains neither the script's directory nor the user's
site-packages directory. All PYTHON* environment variables are ignored, too.
Further restrictions may be imposed to prevent the user from injecting
malicious code.
-O
Turn on basic optimizations. This changes the filename extension for compiled
(bytecode) files from .pyc to .pyo. See also PYTHONOPTIMIZE.
-OO
Discard docstrings in addition to the -O optimizations.
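Two observable effects of these flags can be sketched in a shell, assuming python3 on PATH:

```shell
# Under -O, assert statements are stripped, so this exits successfully:
python3 -O -c "assert False, 'never checked under -O'"

# Under -OO, docstrings of compiled Python code are discarded as well:
python3 -OO -c "
def f():
    'a docstring'
print(f.__doc__)
"   # prints None
```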
-q
Don’t display the copyright and version messages even in interactive mode.
-R
Kept for compatibility. On Python 3.3 and greater, hash randomization is turned
on by default.
PYTHONHASHSEED allows you to set a fixed value for the hash seed secret.
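A quick sketch of pinning the seed, assuming a Unix shell with python3 on PATH:

```shell
# With hash randomization on (the default), str hashes differ between
# interpreter runs; fixing PYTHONHASHSEED makes them reproducible.
PYTHONHASHSEED=0 python3 -c "print(hash('spam'))"
PYTHONHASHSEED=0 python3 -c "print(hash('spam'))"   # same value again
```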
-s
Don’t add the user site-packages directory to sys.path.
See also
-u
Force the binary layer of the stdout and stderr streams (which is available as
their buffer attribute) to be unbuffered. The text I/O layer will still be
line-buffered if writing to the console, or block-buffered if redirected to a
non-interactive file.
-v
Print a message each time a module is initialized, showing the place (filename or
built-in module) from which it is loaded. When given twice (-vv), print a message
for each file that is checked for when searching for a module. Also provides
information on module cleanup at exit. See also PYTHONVERBOSE.
-W arg
Multiple -W options may be given; when a warning matches more than one
option, the action for the last matching option is performed. Invalid -W options are
ignored (though, a warning message is printed about invalid options when the
first warning is issued).
The simplest form of argument is one of the following action strings (or a unique
abbreviation):
ignore
Ignore all warnings.
default
Explicitly request the default behavior (printing each warning once per source line).
all
Print a warning each time it occurs (this may generate many messages if a warning is
triggered repeatedly for the same source line, such as inside a loop).
module
Print each warning only the first time it occurs in each module.
once
Print each warning only the first time it occurs in the program.
error
Raise an exception instead of printing a warning message.
action:message:category:module:line
Here, action is as explained above but only applies to messages that match
the remaining fields. Empty fields match all values; trailing empty fields
may be omitted. The message field matches the start of the warning message
printed; this match is case-insensitive. The category field matches the
warning category. This must be a class name; the match tests whether the
actual warning category of the message is a subclass of the specified warning
category. The full class name must be given. The module field matches the
(fully-qualified) module name; this match is case-sensitive. The line field
matches the line number, where zero matches all line numbers and is thus
equivalent to an omitted line number.
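A sketch of a filter using the action and category fields, assuming a Unix shell with python3 on PATH; the warning text is illustrative:

```shell
# Turn DeprecationWarning into an exception (action=error, empty message
# field, category=DeprecationWarning) while leaving other warnings alone.
python3 -W "error::DeprecationWarning" -c "
import warnings
warnings.warn('old API', DeprecationWarning)
"   # exits with a traceback instead of printing the warning
```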
1.2. Environment variables
These environment variables influence Python's behavior; they are processed
before the command-line switches other than -E or -I. It is customary that
command-line switches override environment variables where there is a
conflict.
PYTHONHOME
Change the location of the standard Python libraries. By default, the libraries are
searched
in prefix/lib/pythonversion and exec_prefix/lib/pythonversion,
where prefix and exec_prefix are installation-dependent directories, both
defaulting to /usr/local.
PYTHONPATH
Augment the default search path for module files. The format is the same as the
shell’s PATH: one or more directory pathnames separated by os.pathsep (e.g.
colons on Unix or semicolons on Windows). Non-existent directories are silently
ignored.
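A minimal sketch of the effect, assuming a Unix shell with python3 on PATH; the directory name is illustrative and need not exist:

```shell
# Directories named in PYTHONPATH are added near the front of sys.path
# (nonexistent ones stay in sys.path but are simply skipped on import).
PYTHONPATH=/tmp/my_libs python3 -c "import sys; print('/tmp/my_libs' in sys.path)"   # prints True
```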
PYTHONSTARTUP
If this is the name of a readable file, the Python commands in that file are
executed before the first prompt is displayed in interactive mode. The file is
executed in the same namespace where interactive commands are executed so
that objects defined or imported in it can be used without qualification in the
interactive session. You can also change the
prompts sys.ps1 and sys.ps2 and the hook sys.__interactivehook__ in
this file.
PYTHONOPTIMIZE
If this is set to a non-empty string it is equivalent to specifying the -O option. If set
to an integer, it is equivalent to specifying -O multiple times.
PYTHONDEBUG
If this is set to a non-empty string it is equivalent to specifying the -d option. If set
to an integer, it is equivalent to specifying -d multiple times.
PYTHONINSPECT
If this is set to a non-empty string it is equivalent to specifying the -i option.
This variable can also be modified by Python code using os.environ to force
inspect mode on program termination.
PYTHONUNBUFFERED
If this is set to a non-empty string it is equivalent to specifying the -u option.
PYTHONVERBOSE
If this is set to a non-empty string it is equivalent to specifying the -v option. If set
to an integer, it is equivalent to specifying -v multiple times.
PYTHONCASEOK
If this is set, Python ignores case in import statements. This only works on
Windows and OS X.
PYTHONDONTWRITEBYTECODE
If this is set to a non-empty string, Python won’t try to write .pyc or .pyo files on
the import of source modules. This is equivalent to specifying the -B option.
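The setting is visible at runtime, which makes it easy to check; a sketch assuming a Unix shell with python3 on PATH:

```shell
# The variable is reflected in sys.dont_write_bytecode, just like -B:
PYTHONDONTWRITEBYTECODE=1 python3 -c "import sys; print(sys.dont_write_bytecode)"   # prints True
```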
PYTHONHASHSEED
If this variable is not set or set to random, a random value is used to seed the
hashes of str, bytes and datetime objects.
Its purpose is to allow repeatable hashing, such as for self-tests for the
interpreter itself, or to allow a cluster of Python processes to share hash
values.
PYTHONIOENCODING
If this is set before running the interpreter, it overrides the encoding used for
stdin/stdout/stderr, in the syntax encodingname:errorhandler. Both
the encodingname and the :errorhandler parts are optional and have the
same meaning as in str.encode().
For stderr, the :errorhandler part is ignored; the handler will always
be 'backslashreplace'.
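A sketch of the syntax with both parts given, assuming a Unix shell with python3 on PATH:

```shell
# Force an ASCII stdout with the backslashreplace error handler:
# non-encodable characters are escaped instead of raising
# UnicodeEncodeError. '\u00e9' is "é".
PYTHONIOENCODING=ascii:backslashreplace python3 -c "print('\u00e9')"   # prints \xe9
```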
PYTHONNOUSERSITE
If this is set, Python won't add the user site-packages directory
to sys.path.
PYTHONWARNINGS
This is equivalent to the -W option. If set to a comma separated string, it is
equivalent to specifying -W multiple times.
PYTHONFAULTHANDLER
If this environment variable is set to a non-empty
string, faulthandler.enable() is called at startup: install a handler
for SIGSEGV, SIGFPE, SIGABRT, SIGBUS and SIGILL signals to dump the
Python traceback. This is equivalent to the -X faulthandler option.
PYTHONTRACEMALLOC
If this environment variable is set to a non-empty string, start tracing Python
memory allocations using the tracemalloc module. The value of the variable is
the maximum number of frames stored in a traceback of a trace. For
example, PYTHONTRACEMALLOC=1 stores only the most recent frame.
See tracemalloc.start() for more information.
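A quick check that tracing really starts before user code runs; a sketch assuming a Unix shell with python3 on PATH:

```shell
# With the variable set, tracing is already active at startup, and the
# value becomes the stored traceback depth:
PYTHONTRACEMALLOC=1 python3 -c "
import tracemalloc
print(tracemalloc.is_tracing(), tracemalloc.get_traceback_limit())
"   # prints True 1
```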
PYTHONASYNCIODEBUG
If this environment variable is set to a non-empty string, enable the debug
mode of the asyncio module.
PYTHONTHREADDEBUG
If set, Python will print threading debug info.
PYTHONDUMPREFS
If set, Python will dump objects and reference counts still alive after shutting
down the interpreter.
PYTHONMALLOCSTATS
If set, Python will print memory allocation statistics every time a new object arena
is created, and on shutdown.
If Python doesn't come preinstalled and isn't in your distribution's
repositories either, you can easily make packages for your own distro. Have a
look at the following links:
See also
http://www.debian.org/doc/manuals/maint-guide/first.en.html
pkg_add -r python
pkg_add ftp://ftp.openbsd.org/pub/OpenBSD/4.2/packages/<insert your architecture here>/python-<version>.tgz
For example i386 users get the 2.5.1 version of Python using:
pkg_add ftp://ftp.openbsd.org/pub/OpenBSD/4.2/packages/i386/python-2.5.1p2.tgz
2.1.3. On OpenSolaris
You can get Python from OpenCSW. Various versions of Python are
available and can be installed with e.g. pkgutil -i python27.
Warning
For example, on most Linux systems, the default for both prefix
and exec_prefix is /usr.
File/directory Meaning
exec_prefix/bin/python3
    Recommended location of the interpreter.
prefix/lib/pythonversion, exec_prefix/lib/pythonversion
    Recommended locations of the directories containing the standard modules.
prefix/include/pythonversion, exec_prefix/include/pythonversion
    Recommended locations of the directories containing the include files
    needed for developing Python extensions and embedding the interpreter.
2.4. Miscellaneous
To easily use Python scripts on Unix, you need to make them
executable, e.g. with
$ chmod +x script
and put an appropriate Shebang line at the top of the script. A good
choice is usually
#!/usr/bin/env python3
which searches for the Python interpreter in the whole PATH. However,
some Unices may not have the env command, so you may need to
hardcode /usr/bin/python3 as the interpreter path.
2.5. Editors
Vim and Emacs are excellent editors which support Python very well.
For more information on how to code in Python in these editors, look
at:
http://www.vim.org/scripts/script.php?script_id=790
http://sourceforge.net/projects/python-mode
Geany is an excellent IDE with support for a lot of languages. For more
information, read: http://www.geany.org/
Komodo Edit is another extremely good IDE. It also has support for a
lot of languages. For more information, read http://komodoide.com/.
With ongoing development of Python, some platforms that used to be supported earlier
are no longer supported (due to the lack of users or developers). Check PEP 11 for
details on all unsupported platforms.
See Python for Windows for detailed information about platforms with pre-compiled
installers.
See also
Python on XP
ActivePython
In this dialog, you can add or modify User and System variables.
To change System variables, you need non-restricted access to
your machine (i.e. Administrator rights).
set PYTHONPATH=%PYTHONPATH%;C:\My_python_lib
echo %PATH%
See also
http://support.microsoft.com/kb/100843
C:\WINDOWS\system32;C:\WINDOWS;C:\Python33
3. assoc .py=Python.File
5. ftype Python.File=C:\Path\to\pythonw.exe "%1" %*
py
You should find that the latest version of Python 2.x
you have installed is started - it can be exited as
normal, and any additional command-line arguments
specified will be sent directly to Python.
py -2.6
py -3
#! python
import sys
sys.stdout.write("hello from Python %s\n"
% (sys.version,))
py hello.py
#! python3
Re-executing the command should now print the
latest Python 3.x information. As with the above
command-line examples, you can specify a more
explicit version qualifier. Assuming you have Python
2.6 installed, try changing the first line
to #! python2.6 and you should find the 2.6 version
information printed.
#! /usr/bin/python
#! /usr/bin/python -v
3.4.4. Customization
3.4.4.1. Customization via INI files
Examples:
For example:
[defaults]
python=3.1
[defaults]
python=3
python3=3.1
3.4.5. Diagnostics
If an environment variable PYLAUNCH_DEBUG is set (to
any value), the launcher will print diagnostic
information to stderr (i.e. to the console). While this
information manages to be simultaneously
verbose and terse, it should allow you to see what
versions of Python were located, why a particular
version was chosen and the exact command-line
used to execute the target Python.
3.5.1. PyWin32
The PyWin32 module by Mark Hammond is a
collection of modules for advanced Windows-specific
support. This includes utilities for:
See also
Win32 How Do I...?
by Tim Golden
Python and COM
by David and Paul Boddie
3.5.2. cx_Freeze
cx_Freeze is a distutils extension
(see Extending Distutils) which wraps Python
scripts into executable Windows programs
(*.exe files). When you have done this, you
can distribute your application without
requiring your users to install Python.
3.5.3. WConio
Since Python’s advanced terminal handling
layer, curses, is restricted to Unix-like
systems, there is a library exclusive to
Windows as well: Windows Console I/O for
Python.
See also
Python + Windows + distutils + SWIG + gcc
MinGW
or “Creating Python extensions in C/C++ with SWIG and compiling them with MinGW
gcc under Windows” or “Installing Python extension with distutils and without Microsoft
Visual C++” by Sébastien Sauvage, 2003
MingW – Python extensions
by Trent Apted et al, 2007
“Help for Windows Programmers” by Mark Hammond and Andy Robinson, O’Reilly
Media, 2000, ISBN 1-56592-621-8
A Python for Windows Tutorial
by Amanda Birmingham, 2004
PEP 397 - Python launcher
for Windows
The proposal for the launcher to be included in the Python distribution.
pyvenv /path/to/new/virtual/environment
Running this command creates the target directory (creating any parent directories that
don’t exist already) and places a pyvenv.cfg file in it with a home key pointing to the
Python installation the command was run from. It also creates a bin (or Scripts on
Windows) subdirectory containing a copy of the python binary (or binaries, in the case
of Windows). It also creates an (initially empty) lib/pythonX.Y/site-packages
subdirectory (on Windows, this is Lib\site-packages).
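The same layout can be produced with the venv module run as a script; a sketch assuming a Unix shell with python3 on PATH (the /tmp path is illustrative, and --without-pip merely keeps the sketch independent of pip bootstrapping):

```shell
# Create a virtual environment and inspect what the command writes:
python3 -m venv --without-pip /tmp/demo-venv
cat /tmp/demo-venv/pyvenv.cfg    # contains a "home = ..." key
ls /tmp/demo-venv/bin            # the copied/symlinked python binaries
```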
On Windows, you may have to invoke the pyvenv script as follows, if you don’t have the
relevant PATH and PATHEXT settings:
c:\Temp>c:\Python34\python c:\Python34\Tools\Scripts\pyvenv.py myenv
or equivalently:
c:\Temp>c:\Python34\python -m venv myenv
The command, if run with -h, will show the available options:
positional arguments:
  ENV_DIR               A directory to create the environment in.

optional arguments:
  -h, --help            show this help message and exit
  --system-site-packages
                        Give access to the global site-packages dir to the
                        virtual environment.
  --symlinks            Try to use symlinks rather than copies, when symlinks
                        are not the default for the platform.
  --copies              Try to use copies rather than symlinks, even when
                        symlinks are the default for the platform.
  --clear               Delete the environment directory if it already exists.
                        If not specified and the directory exists, an error is
                        raised.
  --upgrade             Upgrade the environment directory to use this version
                        of Python, assuming Python has been upgraded in-place.
  --without-pip         Skips installing or upgrading pip in the virtual
                        environment (pip is bootstrapped by default)
Depending on how the venv functionality has been invoked, the usage message may
vary slightly, e.g. referencing pyvenv rather than venv.
Changed in version 3.4: Installs pip by default, added the --without-pip
and --copies options
Changed in version 3.4: In earlier versions, if the target directory already existed, an
error was raised, unless the --clear or --upgrade option was provided. Now, if an
existing directory is specified, its contents are removed and the directory is processed
as if it had been newly created.
Multiple paths can be given to pyvenv, in which case an identical virtual
environment will be created, according to the given options, at each provided
path.
Once a venv has been created, it can be “activated” using a script in the venv’s binary
directory. The invocation of the script is platform-specific:
Platform  Shell    Command to activate virtual environment
          fish     $ . <venv>/bin/activate.fish
Windows   cmd.exe  C:> <venv>/Scripts/activate.bat
You don’t specifically need to activate an environment; activation just prepends the
venv’s binary directory to your path, so that “python” invokes the venv’s Python
interpreter and you can run installed scripts without having to use their full path.
However, all scripts installed in a venv should be runnable without activating it, and run
with the venv’s Python automatically.
You can deactivate a venv by typing “deactivate” in your shell. The exact mechanism is
platform-specific: for example, the Bash activation script defines a “deactivate” function,
whereas on Windows there are separate scripts
called deactivate.bat and Deactivate.ps1 which are installed when the venv is
created.
1. Introduction
o 1.1. Alternate Implementations
o 1.2. Notation
2. Lexical analysis
o 2.1. Line structure
o 2.2. Other tokens
o 2.3. Identifiers and keywords
o 2.4. Literals
o 2.5. Operators
o 2.6. Delimiters
3. Data model
o 3.1. Objects, values and types
o 3.2. The standard type hierarchy
o 3.3. Special method names
4. Execution model
o 4.1. Structure of a program
o 4.2. Naming and binding
o 4.3. Exceptions
5. The import system
o 5.1. importlib
o 5.2. Packages
o 5.3. Searching
o 5.4. Loading
o 5.5. The Path Based Finder
o 5.6. Replacing the standard import system
o 5.7. Special considerations for __main__
o 5.8. Open issues
o 5.9. References
6. Expressions
o 6.1. Arithmetic conversions
o 6.2. Atoms
o 6.3. Primaries
o 6.4. The power operator
o 6.5. Unary arithmetic and bitwise operations
o 6.6. Binary arithmetic operations
o 6.7. Shifting operations
o 6.8. Binary bitwise operations
o 6.9. Comparisons
o 6.10. Boolean operations
o 6.11. Conditional expressions
o 6.12. Lambdas
o 6.13. Expression lists
o 6.14. Evaluation order
o 6.15. Operator precedence
7. Simple statements
o 7.1. Expression statements
o 7.2. Assignment statements
o 7.3. The assert statement
o 7.4. The pass statement
o 7.5. The del statement
o 7.6. The return statement
o 7.7. The yield statement
o 7.8. The raise statement
o 7.9. The break statement
o 7.10. The continue statement
o 7.11. The import statement
o 7.12. The global statement
o 7.13. The nonlocal statement
8. Compound statements
o 8.1. The if statement
o 8.2. The while statement
o 8.3. The for statement
o 8.4. The try statement
o 8.5. The with statement
o 8.6. Function definitions
o 8.7. Class definitions
9. Top-level components
o 9.1. Complete Python programs
o 9.2. File input
o 9.3. Interactive input
o 9.4. Expression input
10. Full Grammar specification