The Python/C API
Release 3.9.6
CONTENTS

1 Introduction
1.1 Coding standards
1.2 Include Files
1.3 Useful macros
1.4 Objects, Types and Reference Counts
1.4.1 Reference Counts
1.4.2 Types
1.5 Exceptions
1.6 Embedding Python
1.7 Debugging Builds
4 Reference Counting
5 Exception Handling
5.1 Printing and clearing
5.2 Raising exceptions
5.3 Issuing warnings
5.4 Querying the error indicator
5.5 Signal Handling
5.6 Exception Classes
5.7 Exception Objects
5.8 Unicode Exception Objects
5.9 Recursion Control
5.10 Standard Exceptions
5.11 Standard Warning Categories
6 Utilities
6.1 Operating System Utilities
6.2 System Functions
6.3 Process Control
6.4 Importing Modules
6.5 Data marshalling support
6.6 Parsing arguments and building values
6.6.1 Parsing arguments
6.6.2 Building values
6.7 String conversion and formatting
6.8 Reflection
6.9 Codec registry and support functions
6.9.1 Codec lookup API
6.9.2 Registry API for Unicode encoding error handlers
8.6.7 MemoryView objects
8.6.8 Weak Reference Objects
8.6.9 Capsules
8.6.10 Generator Objects
8.6.11 Coroutine Objects
8.6.12 Context Variables Objects
8.6.13 DateTime Objects
8.6.14 Objects for Type Hinting
12 Object Implementation Support
12.1 Allocating Objects on the Heap
12.2 Common Object Structures
12.2.1 Base object types and macros
12.2.2 Implementing functions and methods
12.2.3 Accessing attributes of extension types
12.3 Type Objects
12.3.1 Quick Reference
12.3.2 PyTypeObject Definition
12.3.3 PyObject Slots
12.3.4 PyVarObject Slots
12.3.5 PyTypeObject Slots
12.3.6 Heap Types
12.4 Number Object Structures
12.5 Mapping Object Structures
12.6 Sequence Object Structures
12.7 Buffer Object Structures
12.8 Async Object Structures
12.9 Slot Type typedefs
12.10 Examples
12.11 Supporting Cyclic Garbage Collection
A Glossary
C.3.18 W3C C14N test suite
D Copyright
Index
This manual documents the API used by C and C++ programmers who want to write extension modules or embed Python.
It is a companion to Extending and Embedding the Python Interpreter, which describes the general principles of extension writing but does not document
the API functions in detail.
CHAPTER ONE: INTRODUCTION
The Application Programmer’s Interface to Python gives C and C++ programmers access to the Python interpreter at a
variety of levels. The API is equally usable from C++, but for brevity it is generally referred to as the Python/C API.
There are two fundamentally different reasons for using the Python/C API. The first reason is to write extension modules
for specific purposes; these are C modules that extend the Python interpreter. This is probably the most common use. The
second reason is to use Python as a component in a larger application; this technique is generally referred to as embedding
Python in an application.
Writing an extension module is a relatively well-understood process, where a “cookbook” approach works well. There are
several tools that automate the process to some extent. While people have embedded Python in other applications since
its early existence, the process of embedding Python is less straightforward than writing an extension.
Many API functions are useful independent of whether you’re embedding or extending Python; moreover, most applica-
tions that embed Python will need to provide a custom extension as well, so it’s probably a good idea to become familiar
with writing an extension before attempting to embed Python in a real application.
1.1 Coding standards

If you’re writing C code for inclusion in CPython, you must follow the guidelines and standards defined in PEP 7. These
guidelines apply regardless of the version of Python you are contributing to. Following these conventions is not necessary
for your own third party extension modules, unless you eventually expect to contribute them to Python.
1.2 Include Files

All function, type and macro definitions needed to use the Python/C API are included in your code by the following line:
#define PY_SSIZE_T_CLEAN
#include <Python.h>
This implies inclusion of the following standard headers: <stdio.h>, <string.h>, <errno.h>, <limits.h>,
<assert.h> and <stdlib.h> (if available).
Note: Since Python may define some pre-processor definitions which affect the standard headers on some systems, you
must include Python.h before any standard headers are included.
It is recommended to always define PY_SSIZE_T_CLEAN before including Python.h. See Parsing arguments and
building values for a description of this macro.
All user visible names defined by Python.h (except those defined by the included standard headers) have one of the prefixes
Py or _Py. Names beginning with _Py are for internal use by the Python implementation and should not be used by
extension writers. Structure member names do not have a reserved prefix.
Note: User code should never define names that begin with Py or _Py. This confuses the reader, and jeopardizes the
portability of the user code to future Python versions, which may define additional names beginning with one of these
prefixes.
The header files are typically installed with Python. On Unix, these are located in the directories prefix/include/
pythonversion/ and exec_prefix/include/pythonversion/, where prefix and exec_prefix
are defined by the corresponding parameters to Python’s configure script and version is '%d.%d' % sys.
version_info[:2]. On Windows, the headers are installed in prefix/include, where prefix is the in-
stallation directory specified to the installer.
To include the headers, place both directories (if different) on your compiler’s search path for includes. Do not place
the parent directories on the search path and then use #include <pythonX.Y/Python.h>; this will break on
multi-platform builds since the platform independent headers under prefix include the platform specific headers from
exec_prefix.
C++ users should note that although the API is defined entirely using C, the header files properly declare the entry points
to be extern "C". As a result, there is no need to do anything special to use the API from C++.
1.3 Useful macros

Several useful macros are defined in the Python header files. Many are defined closer to where they are useful (e.g.
Py_RETURN_NONE). Others of a more general utility are defined here. This is not necessarily a complete listing.
Py_UNREACHABLE()
Use this when you have a code path that cannot be reached by design. For example, in the default: clause in
a switch statement for which all possible values are covered in case statements. Use this in places where you
might be tempted to put an assert(0) or abort() call.
In release mode, the macro helps the compiler to optimize the code, and avoids a warning about unreachable code.
For example, the macro is implemented with __builtin_unreachable() on GCC in release mode.
A use for Py_UNREACHABLE() is following a call to a function that never returns but that is not declared
_Py_NO_RETURN.
If a code path is very unlikely but can still be reached under exceptional circumstances, this macro must not be used;
examples are a low-memory condition or a system call returning a value outside the expected range. In such cases,
it is better to report the error to the caller. If the error cannot be reported to the caller, Py_FatalError() can be
used.
New in version 3.7.
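As an illustration, here is a minimal sketch (the enum, its values and the function are hypothetical, not part of the API):

const char *
color_name(enum color c)       /* 'enum color' with RED, GREEN, BLUE is hypothetical */
{
    switch (c) {
    case RED:   return "red";
    case GREEN: return "green";
    case BLUE:  return "blue";
    default:    Py_UNREACHABLE();   /* every enumerator is handled above */
    }
}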
Py_ABS(x)
Return the absolute value of x.
New in version 3.3.
Py_MIN(x, y)
Return the minimum value between x and y.
New in version 3.3.
Py_MAX(x, y)
Return the maximum value between x and y.
PyDoc_STR(str)
Creates a docstring for the given input string or an empty string if docstrings are disabled.
Use PyDoc_STR in specifying docstrings to support building Python without docstrings, as specified in PEP 7.
Example:
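A typical use is in a method table; the following is only a sketch, and deque_count is a hypothetical C function:

static PyMethodDef deque_methods[] = {
    {"count", (PyCFunction)deque_count, METH_O,
     PyDoc_STR("count(x) -> integer -- return number of occurrences of x")},
    {NULL, NULL, 0, NULL}   /* sentinel */
};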
1.4 Objects, Types and Reference Counts

Most Python/C API functions have one or more arguments as well as a return value of type PyObject*. This type is a
pointer to an opaque data type representing an arbitrary Python object. Since all Python object types are treated the same
way by the Python language in most situations (e.g., assignments, scope rules, and argument passing), it is only fitting that
they should be represented by a single C type. Almost all Python objects live on the heap: you never declare an automatic
or static variable of type PyObject, only pointer variables of type PyObject* can be declared. The sole exception
are the type objects; since these must never be deallocated, they are typically static PyTypeObject objects.
All Python objects (even Python integers) have a type and a reference count. An object’s type determines what kind of
object it is (e.g., an integer, a list, or a user-defined function; there are many more as explained in types). For each of the
well-known types there is a macro to check whether an object is of that type; for instance, PyList_Check(a) is true
if (and only if) the object pointed to by a is a Python list.
1.4.1 Reference Counts

The reference count is important because today’s computers have a finite (and often severely limited) memory size; it
counts how many different places there are that have a reference to an object. Such a place could be another object, or a
global (or static) C variable, or a local variable in some C function. When an object’s reference count becomes zero, the
object is deallocated. If it contains references to other objects, their reference count is decremented. Those other objects
may be deallocated in turn, if this decrement makes their reference count become zero, and so on. (There’s an obvious
problem with objects that reference each other here; for now, the solution is “don’t do that.”)
Reference counts are always manipulated explicitly. The normal way is to use the macro Py_INCREF() to increment an
object’s reference count by one, and Py_DECREF() to decrement it by one. The Py_DECREF() macro is considerably
more complex than the incref one, since it must check whether the reference count becomes zero and then cause the
object’s deallocator to be called. The deallocator is a function pointer contained in the object’s type structure. The type-
specific deallocator takes care of decrementing the reference counts for other objects contained in the object if this is a
compound object type, such as a list, as well as performing any additional finalization that’s needed. There’s no chance that
the reference count can overflow; at least as many bits are used to hold the reference count as there are distinct memory
locations in virtual memory (assuming sizeof(Py_ssize_t) >= sizeof(void*)). Thus, the reference count
increment is a simple operation.
It is not necessary to increment an object’s reference count for every local variable that contains a pointer to an object. In
theory, the object’s reference count goes up by one when the variable is made to point to it and it goes down by one when
the variable goes out of scope. However, these two cancel each other out, so at the end the reference count hasn’t changed.
The only real reason to use the reference count is to prevent the object from being deallocated as long as our variable is
pointing to it. If we know that there is at least one other reference to the object that lives at least as long as our variable,
there is no need to increment the reference count temporarily. An important situation where this arises is in objects that
are passed as arguments to C functions in an extension module that are called from Python; the call mechanism guarantees
to hold a reference to every argument for the duration of the call.
However, a common pitfall is to extract an object from a list and hold on to it for a while without incrementing its
reference count. Some other operation might conceivably remove the object from the list, decrementing its reference
count and possibly deallocating it. The real danger is that innocent-looking operations may invoke arbitrary Python code
which could do this; there is a code path which allows control to flow back to the user from a Py_DECREF(), so almost
any operation is potentially dangerous.
A safe approach is to always use the generic operations (functions whose name begins with PyObject_, PyNumber_,
PySequence_ or PyMapping_). These operations always increment the reference count of the object they return.
This leaves the caller with the responsibility to call Py_DECREF() when they are done with the result; this soon becomes
second nature.
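For instance, a minimal sketch, assuming container and key are valid object pointers already in scope:

PyObject *item = PyObject_GetItem(container, key);  /* new reference, or NULL on error */
if (item == NULL)
    return NULL;                /* propagate the exception to our caller */
/* ... use item ... */
Py_DECREF(item);                /* we own this reference, so release it */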
The reference count behavior of functions in the Python/C API is best explained in terms of ownership of references.
Ownership pertains to references, never to objects (objects are not owned: they are always shared). “Owning a reference”
means being responsible for calling Py_DECREF on it when the reference is no longer needed. Ownership can also
be transferred, meaning that the code that receives ownership of the reference then becomes responsible for eventually
decref’ing it by calling Py_DECREF() or Py_XDECREF() when it’s no longer needed—or passing on this responsibility
(usually to its caller). When a function passes ownership of a reference on to its caller, the caller is said to receive a new
reference. When no ownership is transferred, the caller is said to borrow the reference. Nothing needs to be done for a
borrowed reference.
Conversely, when a calling function passes in a reference to an object, there are two possibilities: the function steals a
reference to the object, or it does not. Stealing a reference means that when you pass a reference to a function, that function
assumes that it now owns that reference, and you are not responsible for it any longer.
Few functions steal references; the two notable exceptions are PyList_SetItem() and PyTuple_SetItem(),
which steal a reference to the item (but not to the tuple or list into which the item is put!). These functions were designed
to steal a reference because of a common idiom for populating a tuple or list with newly created objects; for example,
the code to create the tuple (1, 2, "three") could look like this (forgetting about error handling for the moment;
a better way to code this is shown below):
PyObject *t;
t = PyTuple_New(3);
PyTuple_SetItem(t, 0, PyLong_FromLong(1L));
PyTuple_SetItem(t, 1, PyLong_FromLong(2L));
PyTuple_SetItem(t, 2, PyUnicode_FromString("three"));
Here, PyLong_FromLong() returns a new reference which is immediately stolen by PyTuple_SetItem(). When
you want to keep using an object although the reference to it will be stolen, use Py_INCREF() to grab another reference
before calling the reference-stealing function.
Incidentally, PyTuple_SetItem() is the only way to set tuple items; PySequence_SetItem() and
PyObject_SetItem() refuse to do this since tuples are an immutable data type. You should only use
PyTuple_SetItem() for tuples that you are creating yourself.
Equivalent code for populating a list can be written using PyList_New() and PyList_SetItem().
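For illustration, a sketch of that list-building code, again with error handling omitted:

PyObject *l;
l = PyList_New(3);
PyList_SetItem(l, 0, PyLong_FromLong(1L));
PyList_SetItem(l, 1, PyLong_FromLong(2L));
PyList_SetItem(l, 2, PyUnicode_FromString("three"));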
However, in practice, you will rarely use these ways of creating and populating a tuple or list. There’s a generic function,
Py_BuildValue(), that can create most common objects from C values, directed by a format string. For example,
the above two blocks of code could be replaced by the following (which also takes care of the error checking):
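A sketch of the intended Py_BuildValue() calls; the format codes "(iis)" and "[iis]" build a tuple and a list, respectively, from two C ints and a C string:

PyObject *tuple, *list;
tuple = Py_BuildValue("(iis)", 1, 2, "three");
list = Py_BuildValue("[iis]", 1, 2, "three");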
It is much more common to use PyObject_SetItem() and friends with items whose references you are only borrow-
ing, like arguments that were passed in to the function you are writing. In that case, their behaviour regarding reference
counts is much saner, since you don’t have to increment a reference count so you can give a reference away (“have it be
stolen”). For example, this function sets all items of a list (actually, any mutable sequence) to a given item:
int
set_all(PyObject *target, PyObject *item)
{
    Py_ssize_t i, n;

    n = PyObject_Length(target);
    if (n < 0)
        return -1;
    for (i = 0; i < n; i++) {
        PyObject *index = PyLong_FromSsize_t(i);
        if (!index)
            return -1;
        if (PyObject_SetItem(target, index, item) < 0) {
            Py_DECREF(index);
            return -1;
        }
        Py_DECREF(index);
    }
    return 0;
}
The situation is slightly different for function return values. While passing a reference to most functions does not change
your ownership responsibilities for that reference, many functions that return a reference to an object give you ownership of
the reference. The reason is simple: in many cases, the returned object is created on the fly, and the reference you get is the
only reference to the object. Therefore, the generic functions that return object references, like PyObject_GetItem()
and PySequence_GetItem(), always return a new reference (the caller becomes the owner of the reference).
It is important to realize that whether you own a reference returned by a function depends on which function you call only
— the plumage (the type of the object passed as an argument to the function) doesn’t enter into it! Thus, if you extract
an item from a list using PyList_GetItem(), you don’t own the reference — but if you obtain the same item from
the same list using PySequence_GetItem() (which happens to take exactly the same arguments), you do own a
reference to the returned object.
Here is an example of how you could write a function that computes the sum of the items in a list of integers; once using
PyList_GetItem(), and once using PySequence_GetItem().
long
sum_list(PyObject *list)
{
Py_ssize_t i, n;
long total = 0, value;
PyObject *item;
n = PyList_Size(list);
if (n < 0)
return -1; /* Not a list */
for (i = 0; i < n; i++) {
item = PyList_GetItem(list, i); /* Can't fail */
if (!PyLong_Check(item)) continue; /* Skip non-integers */
value = PyLong_AsLong(item);
if (value == -1 && PyErr_Occurred())
/* Integer too big to fit in a C long, bail out */
return -1;
total += value;
}
return total;
}
long
sum_sequence(PyObject *sequence)
{
    Py_ssize_t i, n;
    long total = 0, value;
    PyObject *item;
    n = PySequence_Length(sequence);
    if (n < 0)
        return -1; /* Has no length */
    for (i = 0; i < n; i++) {
        item = PySequence_GetItem(sequence, i);
        if (item == NULL)
            return -1; /* Not a sequence, or other failure */
        if (PyLong_Check(item)) {
            value = PyLong_AsLong(item);
            Py_DECREF(item);
            if (value == -1 && PyErr_Occurred())
                /* Integer too big to fit in a C long, bail out */
                return -1;
            total += value;
        }
        else {
            Py_DECREF(item); /* Discard reference ownership */
        }
    }
    return total;
}
1.4.2 Types
There are few other data types that play a significant role in the Python/C API; most are simple C types such as int,
long, double and char*. A few structure types are used to describe static tables used to list the functions exported
by a module or the data attributes of a new object type, and another is used to describe the value of a complex number.
These will be discussed together with the functions that use them.
1.5 Exceptions
The Python programmer only needs to deal with exceptions if specific error handling is required; unhandled exceptions
are automatically propagated to the caller, then to the caller’s caller, and so on, until they reach the top-level interpreter,
where they are reported to the user accompanied by a stack traceback.
For C programmers, however, error checking always has to be explicit. All functions in the Python/C API can raise
exceptions, unless an explicit claim is made otherwise in a function’s documentation. In general, when a function en-
counters an error, it sets an exception, discards any object references that it owns, and returns an error indicator. If not
documented otherwise, this indicator is either NULL or -1, depending on the function’s return type. A few functions
return a Boolean true/false result, with false indicating an error. Very few functions return no explicit error indicator or
have an ambiguous return value, and require explicit testing for errors with PyErr_Occurred(). These exceptions
are always explicitly documented.
Exception state is maintained in per-thread storage (this is equivalent to using global storage in an unthreaded application).
A thread can be in one of two states: an exception has occurred, or not. The function PyErr_Occurred() can be used
to check for this: it returns a borrowed reference to the exception type object when an exception has occurred, and NULL
otherwise. There are a number of functions to set the exception state: PyErr_SetString() is the most common
(though not the most general) function to set the exception state, and PyErr_Clear() clears the exception state.
The full exception state consists of three objects (all of which can be NULL): the exception type, the corresponding
exception value, and the traceback. These have the same meanings as the Python result of sys.exc_info(); however,
they are not the same: the Python objects represent the last exception being handled by a Python try … except
statement, while the C level exception state only exists while an exception is being passed on between C functions until
it reaches the Python bytecode interpreter’s main loop, which takes care of transferring it to sys.exc_info() and
friends.
Note that starting with Python 1.5, the preferred, thread-safe way to access the exception state from Python code is to call
the function sys.exc_info(), which returns the per-thread exception state for Python code. Also, the semantics of
both ways to access the exception state have changed so that a function which catches an exception will save and restore
its thread’s exception state so as to preserve the exception state of its caller. This prevents common bugs in exception
handling code caused by an innocent-looking function overwriting the exception being handled; it also reduces the often
unwanted lifetime extension for objects that are referenced by the stack frames in the traceback.
As a general principle, a function that calls another function to perform some task should check whether the called function
raised an exception, and if so, pass the exception state on to its caller. It should discard any object references that it owns,
and return an error indicator, but it should not set another exception — that would overwrite the exception that was just
raised, and lose important information about the exact cause of the error.
A simple example of detecting exceptions and passing them on is shown in the sum_sequence() example above. It so
happens that this example doesn’t need to clean up any owned references when it detects an error. The following example
function shows some error cleanup. First, to remind you why you like Python, we show the equivalent Python code:
def incr_item(dict, key):
    try:
        item = dict[key]
    except KeyError:
        item = 0
    dict[key] = item + 1
The corresponding C code, in full, is:

int
incr_item(PyObject *dict, PyObject *key)
{
    /* Objects all initialized to NULL for Py_XDECREF */
    PyObject *item = NULL, *const_one = NULL, *incremented_item = NULL;
    int rv = -1; /* Return value initialized to -1 (failure) */

    item = PyObject_GetItem(dict, key);
    if (item == NULL) {
        /* Handle KeyError only: */
        if (!PyErr_ExceptionMatches(PyExc_KeyError))
            goto error;

        /* Clear the error and use zero: */
        PyErr_Clear();
        item = PyLong_FromLong(0L);
        if (item == NULL)
            goto error;
    }
    const_one = PyLong_FromLong(1L);
    if (const_one == NULL)
        goto error;

    incremented_item = PyNumber_Add(item, const_one);
    if (incremented_item == NULL)
        goto error;

    if (PyObject_SetItem(dict, key, incremented_item) < 0)
        goto error;
    rv = 0; /* Success */
    /* Continue with cleanup code */

 error:
    /* Cleanup code, shared by success and failure path */

    /* Use Py_XDECREF() to ignore NULL references */
    Py_XDECREF(item);
    Py_XDECREF(const_one);
    Py_XDECREF(incremented_item);

    return rv; /* -1 for error, 0 for success */
}
This example represents an endorsed use of the goto statement in C! It illustrates the use of
PyErr_ExceptionMatches() and PyErr_Clear() to handle specific exceptions, and the use of
Py_XDECREF() to dispose of owned references that may be NULL (note the 'X' in the name; Py_DECREF()
would crash when confronted with a NULL reference). It is important that the variables used to hold owned references
are initialized to NULL for this to work; likewise, the proposed return value is initialized to -1 (failure) and only set to
success after the final call made is successful.
1.6 Embedding Python

The one important task that only embedders (as opposed to extension writers) of the Python interpreter have to worry
about is the initialization, and possibly the finalization, of the Python interpreter. Most functionality of the interpreter
can only be used after the interpreter has been initialized.
The basic initialization function is Py_Initialize(). This initializes the table of loaded modules, and creates the
fundamental modules builtins, __main__, and sys. It also initializes the module search path (sys.path).
Py_Initialize() does not set the “script argument list” (sys.argv). If this variable is needed by Python code that
will be executed later, it must be set explicitly with a call to PySys_SetArgvEx(argc, argv, updatepath)
after the call to Py_Initialize().
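A minimal embedding sketch along these lines (error handling mostly omitted; the script that is run is just an example):

#define PY_SSIZE_T_CLEAN
#include <Python.h>

int
main(int argc, char *argv[])
{
    Py_Initialize();                                   /* start the interpreter */
    PyRun_SimpleString("print('hello from embedded Python')");
    if (Py_FinalizeEx() < 0)                           /* shut it down again */
        return 120;
    return 0;
}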
On most systems (in particular, on Unix and Windows, although the details are slightly different), Py_Initialize()
calculates the module search path based upon its best guess for the location of the standard Python interpreter executable,
assuming that the Python library is found in a fixed location relative to the Python interpreter executable. In particular, it
looks for a directory named lib/pythonX.Y relative to the parent directory where the executable named python is
found on the shell command search path (the environment variable PATH).
For instance, if the Python executable is found in /usr/local/bin/python, it will assume that the libraries are in /
usr/local/lib/pythonX.Y. (In fact, this particular path is also the “fallback” location, used when no executable
file named python is found along PATH.) The user can override this behavior by setting the environment variable
PYTHONHOME, or insert additional directories in front of the standard path by setting PYTHONPATH.
The embedding application can steer the search by calling Py_SetProgramName(file) before calling
Py_Initialize(). Note that PYTHONHOME still overrides this and PYTHONPATH is still inserted in front of the
standard path. An application that requires total control has to provide its own implementation of Py_GetPath(),
Py_GetPrefix(), Py_GetExecPrefix(), and Py_GetProgramFullPath() (all defined in Modules/
getpath.c).
Sometimes, it is desirable to “uninitialize” Python. For instance, the application may want to start over (make another call
to Py_Initialize()) or the application is simply done with its use of Python and wants to free memory allocated by
Python. This can be accomplished by calling Py_FinalizeEx(). The function Py_IsInitialized() returns
true if Python is currently in the initialized state. More information about these functions is given in a later chapter.
Notice that Py_FinalizeEx() does not free all memory allocated by the Python interpreter, e.g. memory allocated
by extension modules currently cannot be released.
1.7 Debugging Builds

Python can be built with several macros to enable extra checks of the interpreter and extension modules. These checks
tend to add a large amount of overhead to the runtime so they are not enabled by default.
A full list of the various types of debugging builds is in the file Misc/SpecialBuilds.txt in the Python source
distribution. Builds are available that support tracing of reference counts, debugging the memory allocator, or low-level
profiling of the main interpreter loop. Only the most frequently-used builds will be described in the remainder of this
section.
Compiling the interpreter with the Py_DEBUG macro defined produces what is generally meant by “a debug build” of
Python. Py_DEBUG is enabled in the Unix build by adding --with-pydebug to the ./configure command. It is
also implied by the presence of the not-Python-specific _DEBUG macro. When Py_DEBUG is enabled in the Unix build,
compiler optimization is disabled.
In addition to the reference count debugging described below, the following extra checks are performed:
• Extra checks are added to the object allocator.
• Extra checks are added to the parser and compiler.
• Downcasts from wide types to narrow types are checked for loss of information.
• A number of assertions are added to the dictionary and set implementations. In addition, the set object acquires a
test_c_api() method.
• Sanity checks of the input arguments are added to frame creation.
• The storage for ints is initialized with a known invalid pattern to catch reference to uninitialized digits.
• Low-level tracing and extra exception checking are added to the runtime virtual machine.
• Extra checks are added to the memory arena implementation.
• Extra debugging is added to the thread module.
There may be additional checks not mentioned here.
Defining Py_TRACE_REFS enables reference tracing. When defined, a circular doubly linked list of active objects
is maintained by adding two extra fields to every PyObject. Total allocations are tracked as well. Upon exit, all
existing references are printed. (In interactive mode this happens after every statement run by the interpreter.) Implied
by Py_DEBUG.
Please refer to Misc/SpecialBuilds.txt in the Python source distribution for more detailed information.
CHAPTER TWO: STABLE APPLICATION BINARY INTERFACE
Traditionally, the C API of Python will change with every release. Most changes will be source-compatible, typically
by only adding API, rather than changing existing API or removing API (although some interfaces do get removed after
being deprecated first).
Unfortunately, the API compatibility does not extend to binary compatibility (the ABI). The reason is primarily the
evolution of struct definitions, where addition of a new field, or changing the type of a field, might not break the API, but
can break the ABI. As a consequence, extension modules need to be recompiled for every Python release (although an
exception is possible on Unix when none of the affected interfaces are used). In addition, on Windows, extension modules
link with a specific pythonXY.dll and need to be recompiled to link with a newer one.
Since Python 3.2, a subset of the API has been declared to guarantee a stable ABI. Extension modules wishing to use
this API (called “limited API”) need to define Py_LIMITED_API. A number of interpreter details then become hidden
from the extension module; in return, a module is built that works on any 3.x version (x>=2) without recompilation.
In some cases, the stable ABI needs to be extended with new functions. Extension modules wishing to use these new
APIs need to set Py_LIMITED_API to the PY_VERSION_HEX value (see API and ABI Versioning) of the minimum
Python version they want to support (e.g. 0x03030000 for Python 3.3). Such modules will work on all subsequent
Python releases, but fail to load (because of missing symbols) on the older releases.
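For example (a sketch), an extension that should load on Python 3.3 and all later releases without recompilation could define, before including Python.h:

#define Py_LIMITED_API 0x03030000   /* minimum supported version: 3.3 */
#include <Python.h>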
As of Python 3.2, the set of functions available to the limited API is documented in PEP 384. In the C API documentation,
API elements that are not part of the limited API are marked as “Not part of the limited API.”
CHAPTER THREE: THE VERY HIGH LEVEL LAYER
The functions in this chapter will let you execute Python source code given in a file or a buffer, but they will not let you
interact in a more detailed way with the interpreter.
Several of these functions accept a start symbol from the grammar as a parameter. The available start symbols are
Py_eval_input, Py_file_input, and Py_single_input. These are described following the functions which
accept them as parameters.
Note also that several of these functions take FILE* parameters. One particular issue which needs to be handled carefully
is that the FILE structure for different C libraries can be different and incompatible. Under Windows (at least), it
is possible for dynamically linked extensions to actually use different libraries, so care should be taken that FILE*
parameters are only passed to these functions if it is certain that they were created by the same library that the Python
runtime is using.
int Py_Main(int argc, wchar_t **argv)
The main program for the standard interpreter. This is made available for programs which embed Python. The
argc and argv parameters should be prepared exactly as those which are passed to a C program’s main() function
(converted to wchar_t according to the user’s locale). It is important to note that the argument list may be modified
(but the contents of the strings pointed to by the argument list are not). The return value will be 0 if the interpreter
exits normally (i.e., without an exception), 1 if the interpreter exits due to an exception, or 2 if the parameter list
does not represent a valid Python command line.
Note that if an otherwise unhandled SystemExit is raised, this function will not return 1, but exit the process,
as long as Py_InspectFlag is not set.
int Py_BytesMain(int argc, char **argv)
Similar to Py_Main() but argv is an array of bytes strings.
New in version 3.8.
int PyRun_AnyFile(FILE *fp, const char *filename)
This is a simplified interface to PyRun_AnyFileExFlags() below, leaving closeit set to 0 and flags set to
NULL.
int PyRun_AnyFileFlags(FILE *fp, const char *filename, PyCompilerFlags *flags)
This is a simplified interface to PyRun_AnyFileExFlags() below, leaving the closeit argument set to 0.
int PyRun_AnyFileEx(FILE *fp, const char *filename, int closeit)
This is a simplified interface to PyRun_AnyFileExFlags() below, leaving the flags argument set to NULL.
int PyRun_AnyFileExFlags(FILE *fp, const char *filename, int closeit, PyCompilerFlags *flags)
If fp refers to a file associated with an interactive device (console or terminal input or Unix pseudo-terminal),
return the value of PyRun_InteractiveLoop(), otherwise return the result of PyRun_SimpleFile().
filename is decoded from the filesystem encoding (sys.getfilesystemencoding()). If filename is NULL,
this function uses "???" as the filename.
Note: On Windows, fp should be opened in binary mode (e.g. fopen(filename, "rb")). Otherwise,
Python may not handle script files with LF line endings correctly.
char* (*PyOS_ReadlineFunctionPointer)(FILE*, FILE*, const char*)
Can be set to point to a function that overrides the default reading of a single line of input at the interpreter’s
prompt. The function is expected to output the string prompt if it’s not NULL, then read a line of input from the
standard input file, returning the resulting string. For example, the readline module sets this hook to provide
line-editing and tab-completion features.
The result must be a string allocated by PyMem_RawMalloc() or PyMem_RawRealloc(), or NULL if an
error occurred.
Changed in version 3.4: The result must be allocated by PyMem_RawMalloc() or PyMem_RawRealloc(),
instead of being allocated by PyMem_Malloc() or PyMem_Realloc().
struct _node* PyParser_SimpleParseString(const char *str, int start)
This is a simplified interface to PyParser_SimpleParseStringFlagsFilename() below, leaving file-
name set to NULL and flags set to 0.
Deprecated since version 3.9, will be removed in version 3.10.
struct _node* PyParser_SimpleParseStringFlags(const char *str, int start, int flags)
This is a simplified interface to PyParser_SimpleParseStringFlagsFilename() below, leaving file-
name set to NULL.
Deprecated since version 3.9, will be removed in version 3.10.
struct _node* PyParser_SimpleParseStringFlagsFilename(const char *str, const char *filename,
int start, int flags)
Parse Python source code from str using the start token start according to the flags argument. The result can be
used to create a code object which can be evaluated efficiently. This is useful if a code fragment must be evaluated
many times. filename is decoded from the filesystem encoding (sys.getfilesystemencoding()).
Deprecated since version 3.9, will be removed in version 3.10.
struct _node* PyParser_SimpleParseFile(FILE *fp, const char *filename, int start)
This is a simplified interface to PyParser_SimpleParseFileFlags() below, leaving flags set to 0.
Deprecated since version 3.9, will be removed in version 3.10.
struct _node* PyParser_SimpleParseFileFlags(FILE *fp, const char *filename, int start, int flags)
Similar to PyParser_SimpleParseStringFlagsFilename(), but the Python source code is read from
fp instead of an in-memory string.
Deprecated since version 3.9, will be removed in version 3.10.
PyObject* PyRun_String(const char *str, int start, PyObject *globals, PyObject *locals)
Return value: New reference. This is a simplified interface to PyRun_StringFlags() below, leaving flags set
to NULL.
PyObject* PyRun_StringFlags(const char *str, int start, PyObject *globals, PyObject *locals, PyCompilerFlags *flags)
Return value: New reference. Execute Python source code from str in the context specified by the objects globals
and locals with the compiler flags specified by flags. globals must be a dictionary; locals can be any object that
implements the mapping protocol. The parameter start specifies the start token that should be used to parse the
source code.
Returns the result of executing the code as a Python object, or NULL if an exception was raised.
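A sketch of evaluating a small expression with PyRun_String(), assuming an already initialized interpreter and leaving finer error handling to the caller:

PyObject *globals = PyDict_New();
PyObject *result;
if (globals == NULL)
    return NULL;
/* make the builtins available to the evaluated code */
PyDict_SetItemString(globals, "__builtins__", PyEval_GetBuiltins());
result = PyRun_String("6 * 7", Py_eval_input, globals, globals);
Py_DECREF(globals);
/* result is a new reference to the int 42, or NULL if an exception was raised */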
PyObject* PyRun_File(FILE *fp, const char *filename, int start, PyObject *globals, PyObject *locals)
Return value: New reference. This is a simplified interface to PyRun_FileExFlags() below, leaving closeit
set to 0 and flags set to NULL.
PyObject* PyRun_FileEx(FILE *fp, const char *filename, int start, PyObject *globals, PyObject *locals,
int closeit)
Return value: New reference. This is a simplified interface to PyRun_FileExFlags() below, leaving flags set
to NULL.
PyObject* PyRun_FileFlags(FILE *fp, const char *filename, int start, PyObject *globals, PyObject *locals,
PyCompilerFlags *flags)
Return value: New reference. This is a simplified interface to PyRun_FileExFlags() below, leaving closeit
set to 0.
PyObject* PyRun_FileExFlags(FILE *fp, const char *filename, int start, PyObject *globals, PyObject *locals, int closeit, PyCompilerFlags *flags)
Return value: New reference. Similar to PyRun_StringFlags(), but the Python source code is read from fp
instead of an in-memory string. filename should be the name of the file, it is decoded from the filesystem encoding
(sys.getfilesystemencoding()). If closeit is true, the file is closed before PyRun_FileExFlags()
returns.
PyObject* Py_CompileString(const char *str, const char *filename, int start)
Return value: New reference. This is a simplified interface to Py_CompileStringFlags() below, leaving
flags set to NULL.
PyObject* Py_CompileStringFlags(const char *str, const char *filename, int start, PyCompilerFlags *flags)
Return value: New reference. This is a simplified interface to Py_CompileStringExFlags() below, with
optimize set to -1.
PyObject* Py_CompileStringObject(const char *str, PyObject *filename, int start, PyCompilerFlags *flags, int optimize)
Return value: New reference. Parse and compile the Python source code in str, returning the resulting code object.
The start token is given by start; this can be used to constrain the code which can be compiled and should be
Py_eval_input, Py_file_input, or Py_single_input. The filename specified by filename is used
to construct the code object and may appear in tracebacks or SyntaxError exception messages. This returns
NULL if the code cannot be parsed or compiled.
The integer optimize specifies the optimization level of the compiler; a value of -1 selects the optimization level of
the interpreter as given by -O options. Explicit levels are 0 (no optimization; __debug__ is true), 1 (asserts are
removed, __debug__ is false) or 2 (docstrings are removed too).
New in version 3.4.
PyObject* Py_CompileStringExFlags(const char *str, const char *filename, int start, PyCompilerFlags *flags, int optimize)
Return value: New reference. Like Py_CompileStringObject(), but filename is a byte string decoded from
the filesystem encoding (os.fsdecode()).
New in version 3.2.
PyObject* PyEval_EvalCode(PyObject *co, PyObject *globals, PyObject *locals)
Return value: New reference. This is a simplified interface to PyEval_EvalCodeEx(), with just the code
object, and global and local variables. The other arguments are set to NULL.
PyObject* PyEval_EvalCodeEx(PyObject *co, PyObject *globals, PyObject *locals, PyObject *const *args,
int argcount, PyObject *const *kws, int kwcount, PyObject *const *defs,
int defcount, PyObject *kwdefs, PyObject *closure)
Return value: New reference. Evaluate a precompiled code object, given a particular environment for its evalua-
tion. This environment consists of a dictionary of global variables, a mapping object of local variables, arrays of
arguments, keywords and defaults, a dictionary of default values for keyword-only arguments and a closure tuple
of cells.
PyFrameObject
The C structure of the objects used to describe frame objects. The fields of this type are subject to change at any
time.
PyObject* PyEval_EvalFrame(PyFrameObject *f)
Return value: New reference. Evaluate an execution frame. This is a simplified interface to
PyEval_EvalFrameEx(), for backward compatibility.
CHAPTER FOUR: REFERENCE COUNTING
The macros in this section are used for managing reference counts of Python objects.
void Py_INCREF(PyObject *o)
Increment the reference count for object o. The object must not be NULL; if you aren’t sure that it isn’t NULL, use
Py_XINCREF().
void Py_XINCREF(PyObject *o)
Increment the reference count for object o. The object may be NULL, in which case the macro has no effect.
void Py_DECREF(PyObject *o)
Decrement the reference count for object o. The object must not be NULL; if you aren’t sure that it isn’t NULL, use
Py_XDECREF(). If the reference count reaches zero, the object’s type’s deallocation function (which must not
be NULL) is invoked.
Warning: The deallocation function can cause arbitrary Python code to be invoked (e.g. when a class instance
with a __del__() method is deallocated). While exceptions in such code are not propagated, the executed
code has free access to all Python global variables. This means that any object that is reachable from a global
variable should be in a consistent state before Py_DECREF() is invoked. For example, code to delete an object
from a list should copy a reference to the deleted object in a temporary variable, update the list data structure,
and then call Py_DECREF() for the temporary variable.
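A sketch of the pattern that the warning describes, for code that owns the references stored in a C array (the array and index are hypothetical):

PyObject *tmp = op_array[i];   /* copy the reference to a temporary */
op_array[i] = NULL;            /* update the data structure first   */
Py_DECREF(tmp);                /* only then trigger possible deallocation */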
CHAPTER FIVE: EXCEPTION HANDLING
The functions described in this chapter will let you handle and raise Python exceptions. It is important to understand
some of the basics of Python exception handling. It works somewhat like the POSIX errno variable: there is a global
indicator (per thread) of the last error that occurred. Most C API functions don’t clear this on success, but will set it to
indicate the cause of the error on failure. Most C API functions also return an error indicator, usually NULL if they are
supposed to return a pointer, or -1 if they return an integer (exception: the PyArg_*() functions return 1 for success
and 0 for failure).
Concretely, the error indicator consists of three object pointers: the exception’s type, the exception’s value, and the
traceback object. Any of those pointers can be NULL if non-set (although some combinations are forbidden, for example
you can’t have a non-NULL traceback if the exception type is NULL).
When a function must fail because some function it called failed, it generally doesn’t set the error indicator; the function
it called already set it. It is responsible for either handling the error and clearing the exception or returning after cleaning
up any resources it holds (such as object references or memory allocations); it should not continue normally if it is not
prepared to handle the error. If returning due to an error, it is important to indicate to the caller that an error has been
set. If the error is not handled or carefully propagated, additional calls into the Python/C API may not behave as intended
and may fail in mysterious ways.
Note: The error indicator is not the result of sys.exc_info(). The former corresponds to an exception that is not
yet caught (and is therefore still propagating), while the latter returns an exception after it is caught (and has therefore
stopped propagating).
5.1 Printing and clearing

void PyErr_Clear()
Clear the error indicator. If the error indicator is not set, there is no effect.
void PyErr_PrintEx(int set_sys_last_vars)
Print a standard traceback to sys.stderr and clear the error indicator, unless the error is a SystemExit, in
which case no traceback is printed and the Python process will exit with the error code specified by the SystemExit
instance.
Call this function only when the error indicator is set. Otherwise it will cause a fatal error!
If set_sys_last_vars is nonzero, the variables sys.last_type, sys.last_value and sys.
last_traceback will be set to the type, value and traceback of the printed exception, respectively.
void PyErr_Print()
Alias for PyErr_PrintEx(1).
5.2 Raising exceptions

These functions help you set the current thread’s error indicator. For convenience, some of these functions will always
return a NULL pointer for use in a return statement.
void PyErr_SetString(PyObject *type, const char *message)
This is the most common way to set the error indicator. The first argument specifies the exception type; it is
normally one of the standard exceptions, e.g. PyExc_RuntimeError. You need not increment its reference
count. The second argument is an error message; it is decoded from 'utf-8'.
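A typical use in an extension function (a sketch; frob is a hypothetical function) sets the error and returns NULL:

static PyObject *
frob(PyObject *self, PyObject *arg)
{
    if (!PyLong_Check(arg)) {
        PyErr_SetString(PyExc_TypeError, "frob() expects an integer");
        return NULL;               /* the error indicator is now set */
    }
    Py_RETURN_NONE;
}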
void PyErr_SetObject(PyObject *type, PyObject *value)
This function is similar to PyErr_SetString() but lets you specify an arbitrary Python object for the “value”
of the exception.
PyObject* PyErr_Format(PyObject *exception, const char *format, ...)
Return value: Always NULL. This function sets the error indicator and returns NULL. exception should be a Python
exception class. The format and subsequent parameters help format the error message; they have the same meaning
and values as in PyUnicode_FromFormat(). format is an ASCII-encoded string.
PyObject* PyErr_FormatV(PyObject *exception, const char *format, va_list vargs)
Return value: Always NULL. Same as PyErr_Format(), but taking a va_list argument rather than a variable
number of arguments.
New in version 3.5.
void PyErr_SetNone(PyObject *type)
This is a shorthand for PyErr_SetObject(type, Py_None).
int PyErr_BadArgument()
This is a shorthand for PyErr_SetString(PyExc_TypeError, message), where message indicates
that a built-in operation was invoked with an illegal argument. It is mostly for internal use.
PyObject* PyErr_NoMemory()
Return value: Always NULL. This is a shorthand for PyErr_SetNone(PyExc_MemoryError); it returns
NULL so an object allocation function can write return PyErr_NoMemory(); when it runs out of memory.
PyObject* PyErr_SetFromErrno(PyObject *type)
Return value: Always NULL. This is a convenience function to raise an exception when a C library function
has returned an error and set the C variable errno. It constructs a tuple object whose first item is the inte-
ger errno value and whose second item is the corresponding error message (gotten from strerror()), and
then calls PyErr_SetObject(type, object). On Unix, when the errno value is EINTR, indicating
an interrupted system call, this calls PyErr_CheckSignals(), and if that set the error indicator, leaves it
set to that. The function always returns NULL, so a wrapper function around a system call can write return
PyErr_SetFromErrno(type); when the system call returns an error.
5.3 Issuing warnings

Use these functions to issue warnings from C code. They mirror similar functions exported by the Python warnings
module. They normally print a warning message to sys.stderr; however, it is also possible that the user has specified that
warnings are to be turned into errors, and in that case they will raise an exception. It is also possible that the functions
raise an exception because of a problem with the warning machinery. The return value is 0 if no exception is raised, or
-1 if an exception is raised. (It is not possible to determine whether a warning message is actually printed, nor what the
reason is for the exception; this is intentional.) If an exception is raised, the caller should do its normal exception handling
(for example, Py_DECREF() owned references and return an error value).
int PyErr_WarnEx(PyObject *category, const char *message, Py_ssize_t stack_level)
Issue a warning message. The category argument is a warning category (see below) or NULL; the message argument
is a UTF-8 encoded string. stack_level is a positive number giving a number of stack frames; the warning will be
issued from the currently executing line of code in that stack frame. A stack_level of 1 is the function calling
PyErr_WarnEx(), 2 is the function above that, and so forth.
Warning categories must be subclasses of PyExc_Warning; PyExc_Warning is a subclass of
PyExc_Exception; the default warning category is PyExc_RuntimeWarning. The standard Python warn-
ing categories are available as global variables whose names are enumerated at Standard Warning Categories.
For information about warning control, see the documentation for the warnings module and the -W option in
the command line documentation. There is no C API for warning control.
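For example (a sketch), a deprecated C-level code path might warn and propagate a failure if the warning was turned into an error:

if (PyErr_WarnEx(PyExc_DeprecationWarning,
                 "old_function() is deprecated, use new_function() instead", 1) < 0) {
    return NULL;   /* the warning was turned into an exception */
}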
PyObject* PyErr_SetImportErrorSubclass(PyObject *exception, PyObject *msg, PyObject *name, PyObject *path)
Return value: Always NULL. Much like PyErr_SetImportError() but this function allows for specifying a
subclass of ImportError to raise.
New in version 3.6.
5.4 Querying the error indicator

PyObject* PyErr_Occurred()
Return value: Borrowed reference. Test whether the error indicator is set. If set, return the exception type (the first
argument to the last call to one of the PyErr_Set*() functions or to PyErr_Restore()). If not set, return
NULL. You do not own a reference to the return value, so you do not need to Py_DECREF() it.
The caller must hold the GIL.
Note: Do not compare the return value to a specific exception; use PyErr_ExceptionMatches() instead,
shown below. (The comparison could easily fail since the exception may be an instance instead of a class, in the
case of a class exception, or it may be a subclass of the expected exception.)
Note: This function is normally only used by code that needs to catch exceptions or by code that needs to save
and restore the error indicator temporarily, e.g.:
{
    PyObject *type, *value, *traceback;
    PyErr_Fetch(&type, &value, &traceback);

    /* ... code that might produce other errors ... */

    PyErr_Restore(type, value, traceback);
}
Note: This function is normally only used by code that needs to save and restore the error indicator temporarily.
Use PyErr_Fetch() to save the current error indicator.
Note: This function does not implicitly set the __traceback__ attribute on the exception value. If setting the
traceback appropriately is desired, the following additional snippet is needed:
if (tb != NULL) {
PyException_SetTraceback(val, tb);
}
Note: This function is not normally used by code that wants to handle exceptions. Rather, it can be used when
code needs to save and restore the exception state temporarily. Use PyErr_SetExcInfo() to restore or clear
the exception state.
Note: This function is not normally used by code that wants to handle exceptions. Rather, it can be used when code
needs to save and restore the exception state temporarily. Use PyErr_GetExcInfo() to read the exception
state.
5.5 Signal Handling

int PyErr_CheckSignals()
This function interacts with Python’s signal handling. It checks whether a signal has been sent to the process
and if so, invokes the corresponding signal handler. If the signal module is supported, this can invoke a signal
handler written in Python. In all cases, the default effect for SIGINT is to raise the KeyboardInterrupt
exception. If an exception is raised the error indicator is set and the function returns -1; otherwise the function
returns 0. The error indicator may or may not be cleared if it was previously set.
void PyErr_SetInterrupt()
Simulate the effect of a SIGINT signal arriving. The next time PyErr_CheckSignals() is called, the Python
signal handler for SIGINT will be called.
If SIGINT isn’t handled by Python (it was set to signal.SIG_DFL or signal.SIG_IGN), this function does
nothing.
int PySignal_SetWakeupFd(int fd)
This utility function specifies a file descriptor to which the signal number is written as a single byte whenever a
signal is received. fd must be non-blocking. It returns the previous such file descriptor.
The value -1 disables the feature; this is the initial state. This is equivalent to signal.set_wakeup_fd()
in Python, but without any error checking. fd should be a valid file descriptor. The function should only be called
from the main thread.
Changed in version 3.5: On Windows, the function now also supports socket handles.
5.8 Unicode Exception Objects

The following functions are used to create and modify Unicode exceptions from C.
PyObject* PyUnicodeDecodeError_Create(const char *encoding, const char *object, Py_ssize_t length,
Py_ssize_t start, Py_ssize_t end, const char *reason)
Return value: New reference. Create a UnicodeDecodeError object with the attributes encoding, object, length,
start, end and reason. encoding and reason are UTF-8 encoded strings.
PyObject* PyUnicodeEncodeError_Create(const char *encoding, const Py_UNICODE *object,
Py_ssize_t length, Py_ssize_t start, Py_ssize_t end, const
char *reason)
Return value: New reference. Create a UnicodeEncodeError object with the attributes encoding, object, length,
start, end and reason. encoding and reason are UTF-8 encoded strings.
Deprecated since version 3.3, will be removed in version 3.11: Py_UNICODE is deprecated since Python 3.3. Please
migrate to PyObject_CallFunction(PyExc_UnicodeEncodeError, "sOnns", ...).
PyObject* PyUnicodeTranslateError_Create(const Py_UNICODE *object, Py_ssize_t length,
Py_ssize_t start, Py_ssize_t end, const char *reason)
Return value: New reference. Create a UnicodeTranslateError object with the attributes object, length,
start, end and reason. reason is a UTF-8 encoded string.
Deprecated since version 3.3, will be removed in version 3.11: Py_UNICODE is deprecated since Python 3.3. Please
migrate to PyObject_CallFunction(PyExc_UnicodeTranslateError, "Onns", ...).
PyObject* PyUnicodeDecodeError_GetEncoding(PyObject *exc)
Return value: New reference. Return the encoding attribute of the given exception object.
These two functions provide a way to perform safe recursive calls at the C level, both in the core and in extension mod-
ules. They are needed if the recursive code does not necessarily invoke Python code (which tracks its recursion depth
automatically). They are also not needed for tp_call implementations because the call protocol takes care of recursion
handling.
int Py_EnterRecursiveCall(const char *where)
Marks a point where a recursive C-level call is about to be performed.
If USE_STACKCHECK is defined, this function checks if the OS stack overflowed using PyOS_CheckStack().
If this is the case, it sets a MemoryError and returns a nonzero value.
The function then checks if the recursion limit is reached. If this is the case, a RecursionError is set and a
nonzero value is returned. Otherwise, zero is returned.
where should be a UTF-8 encoded string such as " in instance check" to be concatenated to the
RecursionError message caused by the recursion depth limit.
Changed in version 3.9: This function is now also available in the limited API.
void Py_LeaveRecursiveCall(void)
Ends a Py_EnterRecursiveCall(). Must be called once for each successful invocation of
Py_EnterRecursiveCall().
Changed in version 3.9: This function is now also available in the limited API.
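A sketch of wrapping a potentially deep C-level recursion (do_one_level() is a hypothetical helper that may call back into this function):
    static PyObject *
    process_nested(PyObject *obj)
    {
        PyObject *result;
        if (Py_EnterRecursiveCall(" while processing nested data"))
            return NULL;            /* RecursionError (or MemoryError) is already set */
        result = do_one_level(obj); /* hypothetical helper that may call process_nested() again */
        Py_LeaveRecursiveCall();
        return result;
    }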
Properly implementing tp_repr for container types requires special recursion handling. In addition to protecting the
stack, tp_repr also needs to track objects to prevent cycles. The following two functions facilitate this functionality.
Effectively, these are the C equivalent to reprlib.recursive_repr().
int Py_ReprEnter(PyObject *object)
Called at the beginning of the tp_repr implementation to detect cycles.
If the object has already been processed, the function returns a positive integer. In that case the tp_repr imple-
mentation should return a string object indicating a cycle. As examples, dict objects return {...} and list
objects return [...].
The function will return a negative integer if the recursion limit is reached. In that case the tp_repr implemen-
tation should typically return NULL.
Otherwise, the function returns zero and the tp_repr implementation can continue normally.
void Py_ReprLeave(PyObject *object)
Ends a Py_ReprEnter(). Must be called once for each invocation of Py_ReprEnter() that returns zero.
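A sketch of a tp_repr implementation for a hypothetical container type that guards against reference cycles:
    static PyObject *
    mycontainer_repr(PyObject *self)
    {
        PyObject *result;
        int rc = Py_ReprEnter(self);
        if (rc != 0) {
            /* positive: cycle detected; negative: recursion limit reached */
            return rc > 0 ? PyUnicode_FromString("mycontainer(...)") : NULL;
        }
        result = PyUnicode_FromString("mycontainer([])");   /* real code would format the items */
        Py_ReprLeave(self);
        return result;
    }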
All standard Python exceptions are available as global variables whose names are PyExc_ followed by the Python ex-
ception name. These have the type PyObject*; they are all class objects. For completeness, here are all the variables:
C Name                      Notes
PyExc_EnvironmentError
PyExc_IOError
PyExc_WindowsError          (3)
Notes:
(1) This is a base class for other standard exceptions.
(2) Only defined on Windows; protect code that uses this by testing that the preprocessor macro MS_WINDOWS is
defined.
All standard Python warning categories are available as global variables whose names are PyExc_ followed by the Python
exception name. These have the type PyObject*; they are all class objects. For completeness, here are all the variables:
CHAPTER SIX
UTILITIES
The functions in this chapter perform various utility tasks, ranging from helping C code be more portable across platforms,
to using Python modules from C, to parsing function arguments and constructing Python values from C values.
Warning: The C fork() call should only be made from the “main” thread (of the “main” interpreter). The
same is true for PyOS_BeforeFork().
Warning: The C fork() call should only be made from the “main” thread (of the “main” interpreter). The
same is true for PyOS_AfterFork_Parent().
void PyOS_AfterFork_Child()
Function to update internal interpreter state after a process fork. This must be called from the child process after
calling fork(), or any similar function that clones the current process, if there is any chance the process will call
back into the Python interpreter. Only available on systems where fork() is defined.
Warning: The C fork() call should only be made from the “main” thread (of the “main” interpreter). The
same is true for PyOS_AfterFork_Child().
Use the Py_EncodeLocale() function to encode the character string back to a byte string.
See also:
The PyUnicode_DecodeFSDefaultAndSize() and PyUnicode_DecodeLocaleAndSize()
functions.
New in version 3.5.
Changed in version 3.7: The function now uses the UTF-8 encoding in the UTF-8 mode.
Changed in version 3.8: The function now uses the UTF-8 encoding on Windows if
Py_LegacyWindowsFSEncodingFlag is zero.
char* Py_EncodeLocale(const wchar_t *text, size_t *error_pos)
Encode a wide character string to the locale encoding with the surrogateescape error handler: surrogate characters
in the range U+DC80..U+DCFF are converted to bytes 0x80..0xFF.
Encoding, highest priority to lowest priority:
• UTF-8 on macOS, Android, and VxWorks;
• UTF-8 on Windows if Py_LegacyWindowsFSEncodingFlag is zero;
• UTF-8 if the Python UTF-8 mode is enabled;
• ASCII if the LC_CTYPE locale is "C", nl_langinfo(CODESET) returns the ASCII encoding (or an
alias), and the mbstowcs() and wcstombs() functions use the ISO-8859-1 encoding.
• the current locale encoding.
The function uses the UTF-8 encoding in the Python UTF-8 mode.
Return a pointer to a newly allocated byte string; use PyMem_Free() to free the memory. Return NULL on an
encoding error or a memory allocation error.
If error_pos is not NULL, *error_pos is set to (size_t)-1 on success, or set to the index of the invalid
character on encoding error.
Use the Py_DecodeLocale() function to decode the bytes string back to a wide character string.
See also:
The PyUnicode_EncodeFSDefault() and PyUnicode_EncodeLocale() functions.
New in version 3.5.
Changed in version 3.7: The function now uses the UTF-8 encoding in the UTF-8 mode.
Changed in version 3.8: The function now uses the UTF-8 encoding on Windows if
Py_LegacyWindowsFSEncodingFlag is zero.
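A minimal sketch of encoding a wide-character string and releasing the result:
    size_t error_pos;
    char *encoded = Py_EncodeLocale(L"caf\xe9", &error_pos);
    if (encoded == NULL) {
        /* encoding error (error_pos gives the index) or memory allocation error */
    }
    else {
        /* ... use encoded ... */
        PyMem_Free(encoded);
    }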
These are utility functions that make functionality from the sys module accessible to C code. They all work with the
current interpreter thread’s sys module’s dict, which is contained in the internal thread state structure.
PyObject *PySys_GetObject(const char *name)
Return value: Borrowed reference. Return the object name from the sys module or NULL if it does not exist,
without setting an exception.
int PySys_SetObject(const char *name, PyObject *v)
Set name in the sys module to v unless v is NULL, in which case name is deleted from the sys module. Returns 0
on success, -1 on error.
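As an illustration (a sketch only; the name _my_extension_flag is hypothetical), the following reads sys.path as a borrowed reference and publishes a value in the sys module:
    PyObject *path = PySys_GetObject("path");        /* borrowed reference, may be NULL */
    if (path != NULL) {
        /* inspect path; do not Py_DECREF() it */
    }

    PyObject *flag = PyLong_FromLong(1);
    if (flag == NULL || PySys_SetObject("_my_extension_flag", flag) < 0) {
        /* handle error */
    }
    Py_XDECREF(flag);                                /* PySys_SetObject() does not steal the reference */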
void PySys_ResetWarnOptions()
Reset sys.warnoptions to an empty list. This function may be called prior to Py_Initialize().
void PySys_AddWarnOption(const wchar_t *s)
Append s to sys.warnoptions. This function must be called prior to Py_Initialize() in order to affect
the warnings filter list.
void PySys_AddWarnOptionUnicode(PyObject *unicode)
Append unicode to sys.warnoptions.
Note: this function is not currently usable from outside the CPython implementation, as it must be called prior to
the implicit import of warnings in Py_Initialize() to be effective, but can’t be called until enough of the
runtime has been initialized to permit the creation of Unicode objects.
void PySys_SetPath(const wchar_t *path)
Set sys.path to a list object of paths found in path which should be a list of paths separated with the platform’s
search path delimiter (: on Unix, ; on Windows).
void PySys_WriteStdout(const char *format, ...)
Write the output string described by format to sys.stdout. No exceptions are raised, even if truncation occurs
(see below).
format should limit the total size of the formatted output string to 1000 bytes or less – after 1000 bytes, the output
string is truncated. In particular, this means that no unrestricted “%s” formats should occur; these should be limited
using “%.<N>s” where <N> is a decimal number calculated so that <N> plus the maximum size of other formatted
text does not exceed 1000 bytes. Also watch out for “%f”, which can print hundreds of digits for very large numbers.
If a problem occurs, or sys.stdout is unset, the formatted message is written to the real (C level) stdout.
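For example, a hypothetical status message can bound its %s conversion so that the whole output stays well under the 1000-byte limit (name and count are assumed variables):
    PySys_WriteStdout("loaded plugin %.500s (%d entry points)\n", name, count);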
void PySys_WriteStderr(const char *format, ...)
As PySys_WriteStdout(), but write to sys.stderr or stderr instead.
void PySys_FormatStdout(const char *format, ...)
Function similar to PySys_WriteStdout() but format the message using PyUnicode_FromFormatV() and
don’t truncate the message to an arbitrary length.
New in version 3.2.
void PySys_FormatStderr(const char *format, ...)
As PySys_FormatStdout(), but write to sys.stderr or stderr instead.
New in version 3.2.
void PySys_AddXOption(const wchar_t *s)
Parse s as a set of -X options and add them to the current options mapping as returned by
PySys_GetXOptions(). This function may be called prior to Py_Initialize().
New in version 3.2.
PyObject *PySys_GetXOptions()
Return value: Borrowed reference. Return the current dictionary of -X options, similarly to sys._xoptions.
On error, NULL is returned and an exception is set.
New in version 3.2.
int PySys_Audit(const char *event, const char *format, ...)
Raise an auditing event with any active hooks. Return zero for success and non-zero with an exception set on failure.
If any hooks have been added, format and other arguments will be used to construct a tuple to pass. Apart from
N, the same format characters as used in Py_BuildValue() are available. If the built value is not a tuple, it
will be added into a single-element tuple. (The N format option consumes a reference, but since there is no way to
know whether arguments to this function will be consumed, using it may cause reference leaks.)
38 Chapter 6. Utilities
The Python/C API, Release 3.9.6
Note that # format characters should always be treated as Py_ssize_t, regardless of whether
PY_SSIZE_T_CLEAN was defined.
sys.audit() performs the same function from Python code.
New in version 3.8.
Changed in version 3.8.2: Require Py_ssize_t for # format characters. Previously, an unavoidable deprecation
warning was raised.
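A minimal sketch of raising a hypothetical audit event with two arguments (path_obj and flags are assumed variables); the format string builds the argument tuple as with Py_BuildValue():
    if (PySys_Audit("myextension.open", "Oi", path_obj, flags) < 0) {
        return NULL;    /* a hook raised an exception; the error indicator is set */
    }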
int PySys_AddAuditHook(Py_AuditHookFunction hook, void *userData)
Append the callable hook to the list of active auditing hooks. Return zero for success and non-zero on failure.
If the runtime has been initialized, also set an error on failure. Hooks added through this API are called for all
interpreters created by the runtime.
The userData pointer is passed into the hook function. Since hook functions may be called from different runtimes,
this pointer should not refer directly to Python state.
This function is safe to call before Py_Initialize(). When called after runtime initialization, existing audit
hooks are notified and may silently abort the operation by raising an error subclassed from Exception (other
errors will not be silenced).
The hook function is of type int (*)(const char *event, PyObject *args, void
*userData), where args is guaranteed to be a PyTupleObject. The hook function is always called
with the GIL held by the Python interpreter that raised the event.
See PEP 578 for a detailed description of auditing. Functions in the runtime and standard library that raise events
are listed in the audit events table. Details are in each function’s documentation.
If the interpreter is initialized, this function raises an auditing event sys.addaudithook with no arguments. If
any existing hooks raise an exception derived from Exception, the new hook will not be added and the exception
is cleared. As a result, callers cannot assume that their hook has been added unless they control all existing hooks.
New in version 3.8.
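A minimal sketch of an audit hook that merely logs event names (it needs <stdio.h>); returning non-zero with an exception set would abort the audited operation:
    static int
    my_audit_hook(const char *event, PyObject *args, void *userData)
    {
        fprintf(stderr, "audit: %s\n", event);
        return 0;
    }

    /* typically installed early, e.g. before Py_Initialize() */
    PySys_AddAuditHook(my_audit_hook, NULL);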
Note: This function does not load or import the module; if the module wasn’t already loaded, you will get an
empty module object. Use PyImport_ImportModule() or one of its variants to import a module. Package
structures implied by a dotted name for name are not created if not already present.
Changed in version 3.3: Uses imp.source_from_cache() in calculating the source path if only the bytecode
path is provided.
long PyImport_GetMagicNumber()
Return the magic number for Python bytecode files (a.k.a. .pyc file). The magic number should be present in the
first four bytes of the bytecode file, in little-endian byte order. Returns -1 on error.
Changed in version 3.3: Return value of -1 upon failure.
const char * PyImport_GetMagicTag()
Return the magic tag string for PEP 3147 format Python bytecode file names. Keep in mind that the value at
sys.implementation.cache_tag is authoritative and should be used instead of this function.
New in version 3.2.
PyObject* PyImport_GetModuleDict()
Return value: Borrowed reference. Return the dictionary used for the module administration (a.k.a. sys.
modules). Note that this is a per-interpreter variable.
PyObject* PyImport_GetModule(PyObject *name)
Return value: New reference. Return the already imported module with the given name. If the module has not been
imported yet then returns NULL but does not set an error. Returns NULL and sets an error if the lookup failed.
New in version 3.7.
PyObject* PyImport_GetImporter(PyObject *path)
Return value: New reference. Return a finder object for a sys.path/pkg.__path__ item path, possibly by
fetching it from the sys.path_importer_cache dict. If it wasn’t yet cached, traverse sys.path_hooks
until a hook is found that can handle the path item. Return None if no hook could; this tells our caller that the
path based finder could not find a finder for this path item. Cache the result in sys.path_importer_cache.
Return a new reference to the finder object.
int PyImport_ImportFrozenModuleObject(PyObject *name)
Return value: New reference. Load a frozen module named name. Return 1 for success, 0 if the module is not
found, and -1 with an exception set if the initialization failed. To access the imported module on a successful load,
use PyImport_ImportModule(). (Note the misnomer — this function would reload the module if it was
already imported.)
New in version 3.3.
Changed in version 3.4: The __file__ attribute is no longer set on the module.
int PyImport_ImportFrozenModule(const char *name)
Similar to PyImport_ImportFrozenModuleObject(), but the name is a UTF-8 encoded string instead
of a Unicode object.
struct _frozen
This is the structure type definition for frozen module descriptors, as generated by the freeze utility (see Tools/
freeze/ in the Python source distribution). Its definition, found in Include/import.h, is:
struct _frozen {
    const char *name;
    const unsigned char *code;
    int size;
};
struct _inittab
Structure describing a single entry in the list of built-in modules. Its definition is:
struct _inittab {
    const char *name;              /* ASCII encoded string */
    PyObject* (*initfunc)(void);
};
These routines allow C code to work with serialized objects using the same data format as the marshal module. There
are functions to write data into the serialization format, and additional functions that can be used to read the data back.
Files used to store marshalled data must be opened in binary mode.
Numeric values are stored with the least significant byte first.
The module supports two versions of the data format: version 0 is the historical version, version 1 shares interned strings in
the file, and upon unmarshalling. Version 2 uses a binary format for floating point numbers. Py_MARSHAL_VERSION
indicates the current file format (currently 2).
void PyMarshal_WriteLongToFile(long value, FILE *file, int version)
Marshal a long integer, value, to file. This will only write the least-significant 32 bits of value, regardless of the
size of the native long type. version indicates the file format.
void PyMarshal_WriteObjectToFile(PyObject *value, FILE *file, int version)
Marshal a Python object, value, to file. version indicates the file format.
PyObject* PyMarshal_WriteObjectToString(PyObject *value, int version)
Return value: New reference. Return a bytes object containing the marshalled representation of value. version
indicates the file format.
The following functions allow marshalled values to be read back in.
long PyMarshal_ReadLongFromFile(FILE *file)
Return a C long from the data stream in a FILE* opened for reading. Only a 32-bit value can be read in using
this function, regardless of the native size of long.
On error, sets the appropriate exception (EOFError) and returns -1.
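As an illustration (a sketch only), the following round-trips an object through the in-memory marshal format; PyMarshal_ReadObjectFromString() is the matching read function, declared with the functions above in Include/marshal.h:
    PyObject *obj = PyLong_FromLong(42);
    PyObject *data = PyMarshal_WriteObjectToString(obj, Py_MARSHAL_VERSION);
    if (data != NULL) {
        PyObject *copy = PyMarshal_ReadObjectFromString(
            PyBytes_AS_STRING(data), PyBytes_GET_SIZE(data));
        Py_XDECREF(copy);
        Py_DECREF(data);
    }
    Py_DECREF(obj);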
These functions are useful when creating your own extension functions and methods. Additional information and
examples are available in extending-index.
The first three of these functions described, PyArg_ParseTuple(), PyArg_ParseTupleAndKeywords(),
and PyArg_Parse(), all use format strings which are used to tell the function about the expected arguments. The
format strings use the same syntax for each of these functions.
A format string consists of zero or more “format units.” A format unit describes one Python object; it is usually a single
character or a parenthesized sequence of format units. With a few exceptions, a format unit that is not a parenthesized
sequence normally corresponds to a single address argument to these functions. In the following description, the quoted
form is the format unit; the entry in (round) parentheses is the Python object type that matches the format unit; and the
entry in [square] brackets is the type of the C variable(s) whose address should be passed.
These formats allow accessing an object as a contiguous chunk of memory. You don’t have to provide raw storage for the
returned unicode or bytes area.
In general, when a format sets a pointer to a buffer, the buffer is managed by the corresponding Python object, and the
buffer shares the lifetime of this object. You won’t have to release any memory yourself. The only exceptions are es,
es#, et and et#.
However, when a Py_buffer structure gets filled, the underlying buffer is locked so that the caller can subsequently
use the buffer even inside a Py_BEGIN_ALLOW_THREADS block without the risk of mutable data being resized or
destroyed. As a result, you have to call PyBuffer_Release() after you have finished processing the data (or in any
early abort case).
Unless otherwise stated, buffers are not NUL-terminated.
Some formats require a read-only bytes-like object, and set a pointer instead of a buffer structure. They work by checking
that the object’s PyBufferProcs.bf_releasebuffer field is NULL, which disallows mutable objects such as
bytearray.
Note: For all # variants of formats (s#, y#, etc.), the type of the length argument (int or Py_ssize_t) is con-
trolled by defining the macro PY_SSIZE_T_CLEAN before including Python.h. If the macro was defined, length is
a Py_ssize_t rather than an int. This behavior will change in a future Python version to only support Py_ssize_t
and drop int support. It is best to always define PY_SSIZE_T_CLEAN.
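For instance (a minimal sketch, with a hypothetical method take_text), defining the macro before the include makes the length written by s# a Py_ssize_t:
    #define PY_SSIZE_T_CLEAN
    #include <Python.h>

    static PyObject *
    take_text(PyObject *self, PyObject *args)
    {
        const char *text;
        Py_ssize_t len;
        if (!PyArg_ParseTuple(args, "s#", &text, &len))
            return NULL;
        return PyLong_FromSsize_t(len);
    }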
s (str) [const char *] Convert a Unicode object to a C pointer to a character string. A pointer to an existing string
is stored in the character pointer variable whose address you pass. The C string is NUL-terminated. The Python
string must not contain embedded null code points; if it does, a ValueError exception is raised. Unicode objects
are converted to C strings using 'utf-8' encoding. If this conversion fails, a UnicodeError is raised.
Note: This format does not accept bytes-like objects. If you want to accept filesystem paths and convert them to C
character strings, it is preferable to use the O& format with PyUnicode_FSConverter() as converter.
Changed in version 3.5: Previously, TypeError was raised when embedded null code points were encountered
in the Python string.
s* (str or bytes-like object) [Py_buffer] This format accepts Unicode objects as well as bytes-like objects. It fills a
Py_buffer structure provided by the caller. In this case the resulting C string may contain embedded NUL
bytes. Unicode objects are converted to C strings using 'utf-8' encoding.
s# (str, read-only bytes-like object) [const char *, int or Py_ssize_t] Like s*, except that it doesn’t accept mu-
table objects. The result is stored into two C variables, the first one a pointer to a C string, the second one its
length. The string may contain embedded null bytes. Unicode objects are converted to C strings using 'utf-8'
encoding.
z (str or None) [const char *] Like s, but the Python object may also be None, in which case the C pointer is set to
NULL.
z* (str, bytes-like object or None) [Py_buffer] Like s*, but the Python object may also be None, in which case the
buf member of the Py_buffer structure is set to NULL.
z# (str, read-only bytes-like object or None) [const char *, int or Py_ssize_t] Like s#, but the Python object
may also be None, in which case the C pointer is set to NULL.
y (read-only bytes-like object) [const char *] This format converts a bytes-like object to a C pointer to a character
string; it does not accept Unicode objects. The bytes buffer must not contain embedded null bytes; if it does, a
ValueError exception is raised.
Changed in version 3.5: Previously, TypeError was raised when embedded null bytes were encountered in the
bytes buffer.
y* (bytes-like object) [Py_buffer] This variant on s* doesn’t accept Unicode objects, only bytes-like objects. This is
the recommended way to accept binary data.
y# (read-only bytes-like object) [const char *, int or Py_ssize_t] This variant on s# doesn’t accept Unicode ob-
jects, only bytes-like objects.
S (bytes) [PyBytesObject *] Requires that the Python object is a bytes object, without attempting any conversion.
Raises TypeError if the object is not a bytes object. The C variable may also be declared as PyObject*.
Y (bytearray) [PyByteArrayObject *] Requires that the Python object is a bytearray object, without attempting
any conversion. Raises TypeError if the object is not a bytearray object. The C variable may also be declared
as PyObject*.
u (str) [const Py_UNICODE *] Convert a Python Unicode object to a C pointer to a NUL-terminated buffer of Uni-
code characters. You must pass the address of a Py_UNICODE pointer variable, which will be filled with the
pointer to an existing Unicode buffer. Please note that the width of a Py_UNICODE character depends on compi-
lation options (it is either 16 or 32 bits). The Python string must not contain embedded null code points; if it does,
a ValueError exception is raised.
Changed in version 3.5: Previously, TypeError was raised when embedded null code points were encountered
in the Python string.
Deprecated since version 3.3, will be removed in version 3.12: Part of the old-style Py_UNICODE API; please
migrate to using PyUnicode_AsWideCharString().
u# (str) [const Py_UNICODE *, int or Py_ssize_t] This variant on u stores into two C variables, the first one a
pointer to a Unicode data buffer, the second one its length. This variant allows null code points.
Deprecated since version 3.3, will be removed in version 3.12: Part of the old-style Py_UNICODE API; please
migrate to using PyUnicode_AsWideCharString().
Z (str or None) [const Py_UNICODE *] Like u, but the Python object may also be None, in which case the
Py_UNICODE pointer is set to NULL.
Deprecated since version 3.3, will be removed in version 3.12: Part of the old-style Py_UNICODE API; please
migrate to using PyUnicode_AsWideCharString().
Z# (str or None) [const Py_UNICODE *, int or Py_ssize_t] Like u#, but the Python object may also be
None, in which case the Py_UNICODE pointer is set to NULL.
Deprecated since version 3.3, will be removed in version 3.12: Part of the old-style Py_UNICODE API; please
migrate to using PyUnicode_AsWideCharString().
U (str) [PyObject *] Requires that the Python object is a Unicode object, without attempting any conversion. Raises
TypeError if the object is not a Unicode object. The C variable may also be declared as PyObject*.
w* (read-write bytes-like object) [Py_buffer] This format accepts any object which implements the read-write buffer
interface. It fills a Py_buffer structure provided by the caller. The buffer may contain embedded null bytes.
The caller has to call PyBuffer_Release() when it is done with the buffer.
es (str) [const char *encoding, char **buffer] This variant on s is used for encoding Unicode into a character
buffer. It only works for encoded data without embedded NUL bytes.
This format requires two arguments. The first is only used as input, and must be a const char* which points
to the name of an encoding as a NUL-terminated string, or NULL, in which case 'utf-8' encoding is used. An
exception is raised if the named encoding is not known to Python. The second argument must be a char**; the
value of the pointer it references will be set to a buffer with the contents of the argument text. The text will be
encoded in the encoding specified by the first argument.
PyArg_ParseTuple() will allocate a buffer of the needed size, copy the encoded data into this buffer and
adjust *buffer to reference the newly allocated storage. The caller is responsible for calling PyMem_Free() to
free the allocated buffer after use.
et (str, bytes or bytearray) [const char *encoding, char **buffer] Same as es except that byte string objects
are passed through without recoding them. Instead, the implementation assumes that the byte string object uses the
encoding passed in as parameter.
es# (str) [const char *encoding, char **buffer, int or Py_ssize_t *buffer_length] This variant on s# is used
for encoding Unicode into a character buffer. Unlike the es format, this variant allows input data which contains
NUL characters.
It requires three arguments. The first is only used as input, and must be a const char* which points to the name
of an encoding as a NUL-terminated string, or NULL, in which case 'utf-8' encoding is used. An exception is
raised if the named encoding is not known to Python. The second argument must be a char**; the value of the
pointer it references will be set to a buffer with the contents of the argument text. The text will be encoded in the
encoding specified by the first argument. The third argument must be a pointer to an integer; the referenced integer
will be set to the number of bytes in the output buffer.
There are two modes of operation:
If *buffer points a NULL pointer, the function will allocate a buffer of the needed size, copy the encoded data
into this buffer and set *buffer to reference the newly allocated storage. The caller is responsible for calling
PyMem_Free() to free the allocated buffer after usage.
If *buffer points to a non-NULL pointer (an already allocated buffer), PyArg_ParseTuple() will use this
location as the buffer and interpret the initial value of *buffer_length as the buffer size. It will then copy the
encoded data into the buffer and NUL-terminate it. If the buffer is not large enough, a ValueError will be set.
In both cases, *buffer_length is set to the length of the encoded data without the trailing NUL byte.
et# (str, bytes or bytearray) [const char *encoding, char **buffer, int or Py_ssize_t *buffer_length]
Same as es# except that byte string objects are passed through without recoding them. Instead, the implementation
assumes that the byte string object uses the encoding passed in as parameter.
Numbers
b (int) [unsigned char] Convert a nonnegative Python integer to an unsigned tiny int, stored in a C unsigned
char.
B (int) [unsigned char] Convert a Python integer to a tiny int without overflow checking, stored in a C unsigned
char.
h (int) [short int] Convert a Python integer to a C short int.
H (int) [unsigned short int] Convert a Python integer to a C unsigned short int, without overflow checking.
i (int) [int] Convert a Python integer to a plain C int.
I (int) [unsigned int] Convert a Python integer to a C unsigned int, without overflow checking.
l (int) [long int] Convert a Python integer to a C long int.
k (int) [unsigned long] Convert a Python integer to a C unsigned long without overflow checking.
L (int) [long long] Convert a Python integer to a C long long.
K (int) [unsigned long long] Convert a Python integer to a C unsigned long long without overflow checking.
n (int) [Py_ssize_t] Convert a Python integer to a C Py_ssize_t.
c (bytes or bytearray of length 1) [char] Convert a Python byte, represented as a bytes or bytearray object
of length 1, to a C char.
Changed in version 3.3: Allow bytearray objects.
C (str of length 1) [int] Convert a Python character, represented as a str object of length 1, to a C int.
f (float) [float] Convert a Python floating point number to a C float.
d (float) [double] Convert a Python floating point number to a C double.
D (complex) [Py_complex] Convert a Python complex number to a C Py_complex structure.
Other objects
O (object) [PyObject *] Store a Python object (without any conversion) in a C object pointer. The C program thus
receives the actual object that was passed. The object’s reference count is not increased. The pointer stored is not
NULL.
O! (object) [typeobject, PyObject *] Store a Python object in a C object pointer. This is similar to O, but takes two
C arguments: the first is the address of a Python type object, the second is the address of the C variable (of
type PyObject*) into which the object pointer is stored. If the Python object does not have the required type,
TypeError is raised.
O& (object) [converter, anything] Convert a Python object to a C variable through a converter function. This takes two
arguments: the first is a function, the second is the address of a C variable (of arbitrary type), converted to void
*. The converter function in turn is called as follows:
status = converter(object, address);
where object is the Python object to be converted and address is the void* argument that was passed to the
PyArg_Parse*() function. The returned status should be 1 for a successful conversion and 0 if the conversion
has failed. When the conversion fails, the converter function should raise an exception and leave the content of
address unmodified.
If the converter returns Py_CLEANUP_SUPPORTED, it may get called a second time if the argument parsing
eventually fails, giving the converter a chance to release any memory that it had already allocated. In this second
call, the object parameter will be NULL; address will have the same value as in the original call.
Changed in version 3.1: Py_CLEANUP_SUPPORTED was added.
p (bool) [int] Tests the value passed in for truth (a boolean predicate) and converts the result to its equivalent C
true/false integer value. Sets the int to 1 if the expression was true and 0 if it was false. This accepts any valid
Python value. See truth for more information about how Python tests values for truth.
New in version 3.3.
(items) (tuple) [matching-items] The object must be a Python sequence whose length is the number of format
units in items. The C arguments must correspond to the individual format units in items. Format units for sequences
may be nested.
It is possible to pass “long” integers (integers whose value exceeds the platform’s LONG_MAX) however no proper range
checking is done — the most significant bits are silently truncated when the receiving field is too small to receive the value
(actually, the semantics are inherited from downcasts in C — your mileage may vary).
A few other characters have a meaning in a format string. These may not occur inside nested parentheses. They are:
| Indicates that the remaining arguments in the Python argument list are optional. The C variables corresponding to
optional arguments should be initialized to their default value — when an optional argument is not specified,
PyArg_ParseTuple() does not touch the contents of the corresponding C variable(s).
$ PyArg_ParseTupleAndKeywords() only: Indicates that the remaining arguments in the Python argument list
are keyword-only. Currently, all keyword-only arguments must also be optional arguments, so | must always be
specified before $ in the format string.
New in version 3.3.
: The list of format units ends here; the string after the colon is used as the function name in error messages (the
“associated value” of the exception that PyArg_ParseTuple() raises).
; The list of format units ends here; the string after the semicolon is used as the error message instead of the default
error message. : and ; mutually exclude each other.
Note that any Python object references which are provided to the caller are borrowed references; do not decrement their
reference count!
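Putting these pieces together, here is a sketch of a METH_VARARGS function body that parses one required string and one optional int, using the hypothetical name myfunc in error messages:
    const char *name;
    int count = 1;          /* default used when the optional argument is omitted */
    if (!PyArg_ParseTuple(args, "s|i:myfunc", &name, &count))
        return NULL;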
Additional arguments passed to these functions must be addresses of variables whose type is determined by the format
string; these are used to store values from the input tuple. There are a few cases, as described in the list of format units
above, where these parameters are used as input values; they should match what is specified for the corresponding format
unit in that case.
For the conversion to succeed, the arg object must match the format and the format must be exhausted. On success,
the PyArg_Parse*() functions return true, otherwise they return false and raise an appropriate exception. When the
PyArg_Parse*() functions fail due to conversion failure in one of the format units, the variables at the addresses
corresponding to that and the following format units are left untouched.
API Functions
static PyObject *
weakref_ref(PyObject *self, PyObject *args)
{
    PyObject *object;
    PyObject *callback = NULL;
    PyObject *result = NULL;

    if (PyArg_UnpackTuple(args, "ref", 1, 2, &object, &callback)) {
        result = PyWeakref_NewRef(object, callback);
    }
    return result;
}
u (str) [const wchar_t *] Convert a null-terminated wchar_t buffer of Unicode (UTF-16 or UCS-4) data to
a Python Unicode object. If the Unicode buffer pointer is NULL, None is returned.
u# (str) [const wchar_t *, int or Py_ssize_t] Convert a Unicode (UTF-16 or UCS-4) data buffer and its
length to a Python Unicode object. If the Unicode buffer pointer is NULL, the length is ignored and None is
returned.
U (str or None) [const char *] Same as s.
U# (str or None) [const char *, int or Py_ssize_t] Same as s#.
i (int) [int] Convert a plain C int to a Python integer object.
b (int) [char] Convert a plain C char to a Python integer object.
h (int) [short int] Convert a plain C short int to a Python integer object.
l (int) [long int] Convert a C long int to a Python integer object.
B (int) [unsigned char] Convert a C unsigned char to a Python integer object.
H (int) [unsigned short int] Convert a C unsigned short int to a Python integer object.
I (int) [unsigned int] Convert a C unsigned int to a Python integer object.
k (int) [unsigned long] Convert a C unsigned long to a Python integer object.
L (int) [long long] Convert a C long long to a Python integer object.
K (int) [unsigned long long] Convert a C unsigned long long to a Python integer object.
n (int) [Py_ssize_t] Convert a C Py_ssize_t to a Python integer.
c (bytes of length 1) [char] Convert a C int representing a byte to a Python bytes object of length 1.
C (str of length 1) [int] Convert a C int representing a character to a Python str object of length 1.
d (float) [double] Convert a C double to a Python floating point number.
f (float) [float] Convert a C float to a Python floating point number.
D (complex) [Py_complex *] Convert a C Py_complex structure to a Python complex number.
O (object) [PyObject *] Pass a Python object untouched (except for its reference count, which is incremented by
one). If the object passed in is a NULL pointer, it is assumed that this was caused because the call producing
the argument found an error and set an exception. Therefore, Py_BuildValue() will return NULL but
won’t raise an exception. If no exception has been raised yet, SystemError is set.
S (object) [PyObject *] Same as O.
N (object) [PyObject *] Same as O, except it doesn’t increment the reference count on the object. Useful when
the object is created by a call to an object constructor in the argument list.
O& (object) [converter, anything] Convert anything to a Python object through a converter function. The function
is called with anything (which should be compatible with void*) as its argument and should return a “new”
Python object, or NULL if an error occurred.
(items) (tuple) [matching-items] Convert a sequence of C values to a Python tuple with the same number
of items.
[items] (list) [matching-items] Convert a sequence of C values to a Python list with the same number of
items.
{items} (dict) [matching-items] Convert a sequence of C values to a Python dictionary. Each pair of con-
secutive C values adds one item to the dictionary, serving as key and value, respectively.
If there is an error in the format string, the SystemError exception is set and NULL returned.
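A few illustrative calls (a sketch; the comments show the Python values that would be produced):
    PyObject *v1 = Py_BuildValue("i", 123);                     /* 123 */
    PyObject *v2 = Py_BuildValue("(is)", 1, "abc");             /* (1, 'abc') */
    PyObject *v3 = Py_BuildValue("{s:i,s:i}", "a", 1, "b", 2);  /* {'a': 1, 'b': 2} */
    PyObject *v4 = Py_BuildValue("");                           /* None */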
char* PyOS_double_to_string(double val, char format_code, int precision, int flags, int *ptype)
Convert a double val to a string using supplied format_code, precision, and flags.
format_code must be one of 'e', 'E', 'f', 'F', 'g', 'G' or 'r'. For 'r', the supplied precision must be 0
and is ignored. The 'r' format code specifies the standard repr() format.
flags can be zero or more of the values Py_DTSF_SIGN, Py_DTSF_ADD_DOT_0, or Py_DTSF_ALT, or-ed
together:
• Py_DTSF_SIGN means to always precede the returned string with a sign character, even if val is non-
negative.
• Py_DTSF_ADD_DOT_0 means to ensure that the returned string will not look like an integer.
• Py_DTSF_ALT means to apply “alternate” formatting rules. See the documentation for the
PyOS_snprintf() '#' specifier for details.
If ptype is non-NULL, then the value it points to will be set to one of Py_DTST_FINITE, Py_DTST_INFINITE,
or Py_DTST_NAN, signifying that val is a finite number, an infinite number, or not a number, respectively.
The return value is a pointer to buffer with the converted string or NULL if the conversion failed. The caller is
responsible for freeing the returned string by calling PyMem_Free().
New in version 3.1.
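A minimal sketch of converting a double to its repr()-style string and freeing the result:
    int type;
    char *s = PyOS_double_to_string(3.14, 'r', 0, Py_DTSF_ADD_DOT_0, &type);
    if (s != NULL) {
        /* type == Py_DTST_FINITE here */
        PyMem_Free(s);
    }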
int PyOS_stricmp(const char *s1, const char *s2)
Case insensitive comparison of strings. The function works almost identically to strcmp() except that it ignores
the case.
int PyOS_strnicmp(const char *s1, const char *s2, Py_ssize_t size)
Case insensitive comparison of strings. The function works almost identically to strncmp() except that it ignores
the case.
6.8 Reflection
PyObject* PyEval_GetBuiltins(void)
Return value: Borrowed reference. Return a dictionary of the builtins in the current execution frame, or the inter-
preter of the thread state if no frame is currently executing.
PyObject* PyEval_GetLocals(void)
Return value: Borrowed reference. Return a dictionary of the local variables in the current execution frame, or
NULL if no frame is currently executing.
PyObject* PyEval_GetGlobals(void)
Return value: Borrowed reference. Return a dictionary of the global variables in the current execution frame, or
NULL if no frame is currently executing.
PyFrameObject* PyEval_GetFrame(void)
Return value: Borrowed reference. Return the current thread state’s frame, which is NULL if no frame is currently
executing.
See also PyThreadState_GetFrame().
PyFrameObject* PyFrame_GetBack(PyFrameObject *frame)
Get the next outer frame of frame.
Return a strong reference, or NULL if frame has no outer frame.
frame must not be NULL.
New in version 3.9.
In the following functions, the encoding string is converted to all lower-case characters before being looked up, which
makes encodings looked up through this mechanism effectively case-insensitive. If no codec is found, a KeyError is
set and NULL is returned.
PyObject* PyCodec_Encoder(const char *encoding)
Return value: New reference. Get an encoder function for the given encoding.
PyObject* PyCodec_Decoder(const char *encoding)
Return value: New reference. Get a decoder function for the given encoding.
CHAPTER SEVEN
ABSTRACT OBJECTS LAYER
The functions in this chapter interact with Python objects regardless of their type, or with wide classes of object types
(e.g. all numerical types, or all sequence types). When used on object types for which they do not apply, they will raise a
Python exception.
It is not possible to use these functions on objects that are not properly initialized, such as a list object that has been created
by PyList_New(), but whose items have not been set to some non-NULL value yet.
PyObject* Py_NotImplemented
The NotImplemented singleton, used to signal that an operation is not implemented for the given type combi-
nation.
Py_RETURN_NOTIMPLEMENTED
Properly handle returning Py_NotImplemented from within a C function (that is, increment the reference
count of NotImplemented and return it).
int PyObject_Print(PyObject *o, FILE *fp, int flags)
Print an object o, on file fp. Returns -1 on error. The flags argument is used to enable certain printing options.
The only option currently supported is Py_PRINT_RAW; if given, the str() of the object is written instead of
the repr().
int PyObject_HasAttr(PyObject *o, PyObject *attr_name)
Returns 1 if o has the attribute attr_name, and 0 otherwise. This is equivalent to the Python expression
hasattr(o, attr_name). This function always succeeds.
Note that exceptions which occur while calling __getattr__() and __getattribute__() methods will
get suppressed. To get error reporting use PyObject_GetAttr() instead.
int PyObject_HasAttrString(PyObject *o, const char *attr_name)
Returns 1 if o has the attribute attr_name, and 0 otherwise. This is equivalent to the Python expression
hasattr(o, attr_name). This function always succeeds.
Note that exceptions which occur while calling __getattr__() and __getattribute__()
methods and creating a temporary string object will get suppressed. To get error reporting use
PyObject_GetAttrString() instead.
PyObject* PyObject_GetAttr(PyObject *o, PyObject *attr_name)
Return value: New reference. Retrieve an attribute named attr_name from object o. Returns the attribute value on
success, or NULL on failure. This is the equivalent of the Python expression o.attr_name.
PyObject* PyObject_GetAttrString(PyObject *o, const char *attr_name)
Return value: New reference. Retrieve an attribute named attr_name from object o. Returns the attribute value on
success, or NULL on failure. This is the equivalent of the Python expression o.attr_name.
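As suggested in the notes above, a sketch of attribute access with full error reporting (o is an assumed object):
    PyObject *attr = PyObject_GetAttrString(o, "name");   /* new reference or NULL */
    if (attr == NULL) {
        return NULL;       /* AttributeError (or another error) is set */
    }
    /* ... use attr ... */
    Py_DECREF(attr);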
Note: If o1 and o2 are the same object, PyObject_RichCompareBool() will always return 1 for Py_EQ and 0
for Py_NE.
Changed in version 3.2: The return type is now Py_hash_t. This is a signed integer the same size as Py_ssize_t.
Py_hash_t PyObject_HashNotImplemented(PyObject *o)
Set a TypeError indicating that type(o) is not hashable and return -1. This function receives special treatment
when stored in a tp_hash slot, allowing a type to explicitly indicate to the interpreter that it is not hashable.
int PyObject_IsTrue(PyObject *o)
Returns 1 if the object o is considered to be true, and 0 otherwise. This is equivalent to the Python expression not
not o. On failure, return -1.
int PyObject_Not(PyObject *o)
Returns 0 if the object o is considered to be true, and 1 otherwise. This is equivalent to the Python expression not
o. On failure, return -1.
PyObject* PyObject_Type(PyObject *o)
Return value: New reference. When o is non-NULL, returns a type object corresponding to the object type of object
o. On failure, raises SystemError and returns NULL. This is equivalent to the Python expression type(o).
This function increments the reference count of the return value. There’s really no reason to use this function instead
of the common expression o->ob_type, which returns a pointer of type PyTypeObject*, except when the
incremented reference count is needed.
int PyObject_TypeCheck(PyObject *o, PyTypeObject *type)
Return true if the object o is of type type or a subtype of type. Both parameters must be non-NULL.
Py_ssize_t PyObject_Size(PyObject *o)
Py_ssize_t PyObject_Length(PyObject *o)
Return the length of object o. If the object o provides both the sequence and mapping protocols, the sequence
length is returned. On error, -1 is returned. This is the equivalent of the Python expression len(o).
Py_ssize_t PyObject_LengthHint(PyObject *o, Py_ssize_t defaultvalue)
Return an estimated length for the object o. First try to return its actual length, then an estimate using
__length_hint__(), and finally return the default value. On error return -1. This is the equivalent to the
Python expression operator.length_hint(o, defaultvalue).
New in version 3.4.
PyObject* PyObject_GetItem(PyObject *o, PyObject *key)
Return value: New reference. Return element of o corresponding to the object key or NULL on failure. This is the
equivalent of the Python expression o[key].
int PyObject_SetItem(PyObject *o, PyObject *key, PyObject *v)
Map the object key to the value v. Raise an exception and return -1 on failure; return 0 on success. This is the
equivalent of the Python statement o[key] = v. This function does not steal a reference to v.
int PyObject_DelItem(PyObject *o, PyObject *key)
Remove the mapping for the object key from the object o. Return -1 on failure. This is equivalent to the Python
statement del o[key].
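A sketch of dict-style access through these functions; mapping and key are assumed objects:
    PyObject *value = PyLong_FromLong(7);
    if (value == NULL || PyObject_SetItem(mapping, key, value) < 0) {
        Py_XDECREF(value);
        return NULL;
    }
    Py_DECREF(value);                                 /* PyObject_SetItem() does not steal the reference */

    PyObject *got = PyObject_GetItem(mapping, key);   /* new reference or NULL */
    if (got == NULL)
        return NULL;
    Py_DECREF(got);

    if (PyObject_DelItem(mapping, key) < 0)
        return NULL;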
PyObject* PyObject_Dir(PyObject *o)
Return value: New reference. This is equivalent to the Python expression dir(o), returning a (possibly empty)
list of strings appropriate for the object argument, or NULL if there was an error. If the argument is NULL, this is
like the Python dir(), returning the names of the current locals; in this case, if no execution frame is active then
NULL is returned but PyErr_Occurred() will return false.
PyObject* PyObject_GetIter(PyObject *o)
Return value: New reference. This is equivalent to the Python expression iter(o). It returns a new iterator for
the object argument, or the object itself if the object is already an iterator. Raises TypeError and returns NULL
if the object cannot be iterated.
Instances of classes that set tp_call are callable. The signature of the slot is:
PyObject *tp_call(PyObject *self, PyObject *args, PyObject *kwargs);
A call is made using a tuple for the positional arguments and a dict for the keyword arguments, similarly to
callable(*args, **kwargs) in Python code. args must be non-NULL (use an empty tuple if there are no
arguments) but kwargs may be NULL if there are no keyword arguments.
This convention is not only used by tp_call: tp_new and tp_init also pass arguments this way.
To call an object, use PyObject_Call() or other call API.
Warning: A class supporting vectorcall must also implement tp_call with the same semantics.
A class should not implement vectorcall if that would be slower than tp_call. For example, if the callee needs to convert
the arguments to an args tuple and kwargs dict anyway, then there is no point in implementing vectorcall.
Classes can implement the vectorcall protocol by enabling the Py_TPFLAGS_HAVE_VECTORCALL flag and setting
tp_vectorcall_offset to the offset inside the object structure where a vectorcallfunc appears. This is a pointer
to a function with the following signature:
PyObject *(*vectorcallfunc)(PyObject *callable, PyObject *const *args, size_t nargsf, PyObject *kwnames)
• callable is the object being called.
• args is a C array consisting of the positional arguments followed by the values of the keyword arguments.
This can be NULL if there are no arguments.
• nargsf is the number of positional arguments plus possibly the PY_VECTORCALL_ARGUMENTS_OFFSET
flag. To get the actual number of positional arguments from nargsf, use PyVectorcall_NARGS().
• kwnames is a tuple containing the names of the keyword arguments; in other words, the keys of the kwargs
dict. These names must be strings (instances of str or a subclass) and they must be unique. If there are no
keyword arguments, then kwnames can instead be NULL.
PY_VECTORCALL_ARGUMENTS_OFFSET
If this flag is set in a vectorcall nargsf argument, the callee is allowed to temporarily change args[-1]. In other
words, args points to argument 1 (not 0) in the allocated vector. The callee must restore the value of args[-1]
before returning.
For PyObject_VectorcallMethod(), this flag means instead that args[0] may be changed.
Whenever they can do so cheaply (without additional allocation), callers are encouraged to use
PY_VECTORCALL_ARGUMENTS_OFFSET. Doing so will allow callables such as bound methods to make their
onward calls (which include a prepended self argument) very efficiently.
To call an object that implements vectorcall, use a call API function as with any other callable.
PyObject_Vectorcall() will usually be most efficient.
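A sketch of calling callable(1, 2) through the vectorcall API (callable is an assumed object; error checks on the argument objects are abbreviated):
    PyObject *one = PyLong_FromLong(1);
    PyObject *two = PyLong_FromLong(2);
    PyObject *argv[] = { one, two };
    PyObject *result = PyObject_Vectorcall(callable, argv, 2, NULL);
    Py_XDECREF(one);
    Py_XDECREF(two);
    /* result is a new reference, or NULL with an exception set */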
Note: In CPython 3.8, the vectorcall API and related functions were available provisionally under
names with a leading underscore: _PyObject_Vectorcall, _Py_TPFLAGS_HAVE_VECTORCALL,
_PyObject_VectorcallMethod, _PyVectorcall_Function, _PyObject_CallOneArg,
_PyObject_CallMethodNoArgs, _PyObject_CallMethodOneArg. Additionally,
PyObject_VectorcallDict was available as _PyObject_FastCallDict. The old names are still
defined as aliases of the new, non-underscored names.
Recursion Control
When using tp_call, callees do not need to worry about recursion: CPython uses Py_EnterRecursiveCall() and
Py_LeaveRecursiveCall() for calls made using tp_call.
For efficiency, this is not the case for calls done using vectorcall: the callee should use Py_EnterRecursiveCall and
Py_LeaveRecursiveCall if needed.
Py_ssize_t PyVectorcall_NARGS(size_t nargsf)
Given a vectorcall nargsf argument, return the actual number of arguments. Currently equivalent to nargsf &
~PY_VECTORCALL_ARGUMENTS_OFFSET. However, the function PyVectorcall_NARGS should be used to
allow for future extensions.
This function is not part of the limited API.
New in version 3.8.
vectorcallfunc PyVectorcall_Function(PyObject *op)
If op does not support the vectorcall protocol (either because the type does not or because the specific instance
does not), return NULL. Otherwise, return the vectorcall function pointer stored in op. This function never raises
an exception.
This is mostly useful to check whether or not op supports vectorcall, which can be done by checking
PyVectorcall_Function(op) != NULL.
This function is not part of the limited API.
New in version 3.8.
PyObject* PyVectorcall_Call(PyObject *callable, PyObject *tuple, PyObject *dict)
Call callable’s vectorcallfunc with positional and keyword arguments given in a tuple and dict, respectively.
This is a specialized function, intended to be put in the tp_call slot or be used in an implementation of
tp_call. It does not check the Py_TPFLAGS_HAVE_VECTORCALL flag and it does not fall back to
tp_call.
This function is not part of the limited API.
New in version 3.8.
Various functions are available for calling a Python object. Each converts its arguments to a convention supported by the
called object – either tp_call or vectorcall. In order to do as little conversion as possible, pick one that best fits the format
of data you have available.
The following table summarizes the available functions; please see individual documentation for details.
PyObject* PyIter_Next(PyObject *o)
Return value: New reference. Return the next value from the iterator o. The object must be an iterator (it is up to
the caller to check this). If there are no remaining values, returns NULL with no exception set. If an error occurs
while retrieving the item, returns NULL and passes along the exception.
To write a loop which iterates over an iterator, the C code should look something like this:
PyObject *iterator = PyObject_GetIter(obj);
PyObject *item;

if (iterator == NULL) {
    /* propagate error */
}

while ((item = PyIter_Next(iterator))) {
    /* do something with item */
    ...
    /* release reference when done */
    Py_DECREF(item);
}

Py_DECREF(iterator);

if (PyErr_Occurred()) {
    /* propagate error */
}
else {
    /* continue doing useful work */
}
Certain objects available in Python wrap access to an underlying memory array or buffer. Such objects include the built-in
bytes and bytearray, and some extension types like array.array. Third-party libraries may define their own
types for special purposes, such as image processing or numeric analysis.
While each of these types have their own semantics, they share the common characteristic of being backed by a possibly
large memory buffer. It is then desirable, in some situations, to access that buffer directly and without intermediate
copying.
Python provides such a facility at the C level in the form of the buffer protocol. This protocol has two sides:
• on the producer side, a type can export a “buffer interface” which allows objects of that type to expose information
about their underlying buffer. This interface is described in the section Buffer Object Structures;
• on the consumer side, several means are available to obtain a pointer to the raw underlying data of an object (for
example a method parameter).
Simple objects such as bytes and bytearray expose their underlying buffer in byte-oriented form. Other forms are
possible; for example, the elements exposed by an array.array can be multi-byte values.
An example consumer of the buffer interface is the write() method of file objects: any object that can export a series
of bytes through the buffer interface can be written to a file. While write() only needs read-only access to the internal
contents of the object passed to it, other methods such as readinto() need write access to the contents of their
argument. The buffer interface allows objects to selectively allow or reject exporting of read-write and read-only buffers.
There are two ways for a consumer of the buffer interface to acquire a buffer over a target object:
• call PyObject_GetBuffer() with the right parameters;
• call PyArg_ParseTuple() (or one of its siblings) with one of the y*, w* or s* format codes.
In both cases, PyBuffer_Release() must be called when the buffer isn’t needed anymore. Failure to do so could
lead to various issues such as resource leaks.
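A minimal sketch of the first approach, requesting a simple byte buffer from an assumed object obj:
    Py_buffer view;
    if (PyObject_GetBuffer(obj, &view, PyBUF_SIMPLE) < 0) {
        return NULL;       /* obj does not export a (simple) buffer */
    }
    /* view.buf points to view.len contiguous bytes */
    PyBuffer_Release(&view);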
Buffer structures (or simply “buffers”) are useful as a way to expose the binary data from another object to the Python
programmer. They can also be used as a zero-copy slicing mechanism. Using their ability to reference a block of memory,
it is possible to expose any data to the Python programmer quite easily. The memory could be a large, constant array in
a C extension, it could be a raw block of memory for manipulation before passing to an operating system library, or it
could be used to pass around structured data in its native, in-memory format.
Contrary to most data types exposed by the Python interpreter, buffers are not PyObject pointers but rather simple C
structures. This allows them to be created and copied very simply. When a generic wrapper around a buffer is needed, a
memoryview object can be created.
For short instructions on how to write an exporting object, see Buffer Object Structures. For obtaining a buffer, see
PyObject_GetBuffer().
Py_buffer
void *buf
A pointer to the start of the logical structure described by the buffer fields. This can be any location within
the underlying physical memory block of the exporter. For example, with negative strides the value may
point to the end of the memory block.
For contiguous arrays, the value points to the beginning of the memory block.
PyObject *obj
A new reference to the exporting object. The reference is owned by the consumer and automatically decre-
mented and set to NULL by PyBuffer_Release(). The field is the equivalent of the return value of any
standard C-API function.
As a special case, for temporary buffers that are wrapped by PyMemoryView_FromBuffer() or
PyBuffer_FillInfo() this field is NULL. In general, exporting objects MUST NOT use this scheme.
Py_ssize_t len
product(shape) * itemsize. For contiguous arrays, this is the length of the underlying memory
block. For non-contiguous arrays, it is the length that the logical structure would have if it were copied to a
contiguous representation.
Accessing ((char *)buf)[0] up to ((char *)buf)[len-1] is only valid if the buffer has
been obtained by a request that guarantees contiguity. In most cases such a request will be PyBUF_SIMPLE
or PyBUF_WRITABLE.
int readonly
An indicator of whether the buffer is read-only. This field is controlled by the PyBUF_WRITABLE flag.
Py_ssize_t itemsize
Item size in bytes of a single element. Same as the value of struct.calcsize() called on non-NULL
format values.
Important exception: If a consumer requests a buffer without the PyBUF_FORMAT flag, format will be
set to NULL, but itemsize still has the value for the original format.
If shape is present, the equality product(shape) * itemsize == len still holds and the con-
sumer can use itemsize to navigate the buffer.
If shape is NULL as a result of a PyBUF_SIMPLE or a PyBUF_WRITABLE request, the consumer must
disregard itemsize and assume itemsize == 1.
Buffers are usually obtained by sending a buffer request to an exporting object via PyObject_GetBuffer(). Since
the complexity of the logical structure of the memory can vary drastically, the consumer uses the flags argument to specify
the exact buffer type it can handle.
All Py_buffer fields are unambiguously defined by the request type.
request-independent fields
The following fields are not influenced by flags and must always be filled in with the correct values: obj, buf, len,
itemsize, ndim.
readonly, format
PyBUF_WRITABLE
Controls the readonly field. If set, the exporter MUST provide a writable buffer or else report
failure. Otherwise, the exporter MAY provide either a read-only or writable buffer, but the choice
MUST be consistent for all consumers.
PyBUF_FORMAT
Controls the format field. If set, this field MUST be filled in correctly. Otherwise, this field MUST
be NULL.
PyBUF_WRITABLE can be |’d to any of the flags in the next section. Since PyBUF_SIMPLE is defined as 0,
PyBUF_WRITABLE can be used as a stand-alone flag to request a simple writable buffer.
PyBUF_FORMAT can be |’d to any of the flags except PyBUF_SIMPLE. The latter already implies format B (unsigned
bytes).
The flags that control the logical structure of the memory are listed in decreasing order of complexity. Note that each flag
contains all bits of the flags below it.
contiguity requests
C or Fortran contiguity can be explicitly requested, with and without stride information. Without stride information, the
buffer must be C-contiguous.
compound requests
All possible requests are fully defined by some combination of the flags in the previous section. For convenience, the
buffer protocol provides frequently used combinations as single flags.
In the following table U stands for undefined contiguity. The consumer would have to call
PyBuffer_IsContiguous() to determine contiguity.
The logical structure of NumPy-style arrays is defined by itemsize, ndim, shape and strides.
If ndim == 0, the memory location pointed to by buf is interpreted as a scalar of size itemsize. In that case, both
shape and strides are NULL.
If strides is NULL, the array is interpreted as a standard n-dimensional C-array. Otherwise, the consumer must access
an n-dimensional array as follows:
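A sketch of that access rule, assuming view is a filled-in Py_buffer with non-NULL strides and NULL suboffsets, and indices is a hypothetical array holding one valid index per dimension:

char *ptr = (char *)view.buf;
for (int i = 0; i < view.ndim; i++)
    ptr += indices[i] * view.strides[i];    /* add the byte offset contributed by each dimension */
/* interpret the view.itemsize bytes at ptr according to view.format */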
As noted above, buf can point to any location within the actual memory block. An exporter can check the validity of a
buffer with this function:
def verify_structure(memlen, itemsize, ndim, shape, strides, offset):
    # Excerpt: the full check also validates offset, itemsize and the
    # stride bounds against the length of the physical memory block.
    if ndim <= 0:
        return ndim == 0 and not shape and not strides
    if 0 in shape:
        return True
In addition to the regular items, PIL-style arrays can contain pointers that must be followed in order to get to the next
element in a dimension. For example, the regular three-dimensional C-array char v[2][2][3] can also be viewed
as an array of 2 pointers to 2 two-dimensional arrays: char (*v[2])[2][3]. In suboffsets representation, those two
pointers can be embedded at the start of buf, pointing to two char x[2][3] arrays that can be located anywhere in
memory.
Here is a function that returns a pointer to the element in an N-D array pointed to by an N-dimensional index when there
are both non-NULL strides and suboffsets:
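A sketch of such a function (the indices parameter, holding one valid index per dimension, is assumed to be supplied by the caller):

void *
get_item_pointer(int ndim, void *buf, Py_ssize_t *strides,
                 Py_ssize_t *suboffsets, Py_ssize_t *indices)
{
    char *pointer = (char *)buf;
    for (int i = 0; i < ndim; i++) {
        pointer += strides[i] * indices[i];
        if (suboffsets[i] >= 0) {
            /* follow the embedded pointer, then apply the suboffset */
            pointer = *((char **)pointer) + suboffsets[i];
        }
    }
    return (void *)pointer;
}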
Concrete Objects Layer
The functions in this chapter are specific to certain Python object types. Passing them an object of the wrong type is not
a good idea; if you receive an object from a Python program and you are not sure that it has the right type, you must
perform a type check first; for example, to check that an object is a dictionary, use PyDict_Check(). The chapter is
structured like the “family tree” of Python object types.
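For instance, a minimal sketch of the check-before-use pattern (the function count_keys and its argument handling are illustrative, not part of the API):

static PyObject *
count_keys(PyObject *self, PyObject *arg)
{
    if (!PyDict_Check(arg)) {                   /* verify the type first */
        PyErr_SetString(PyExc_TypeError, "expected a dict");
        return NULL;
    }
    return PyLong_FromSsize_t(PyDict_Size(arg));
}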
Warning: While the functions described in this chapter carefully check the type of the objects which are passed
in, many of them do not check for NULL being passed instead of a valid object. Allowing NULL to be passed in can
cause memory access violations and immediate termination of the interpreter.
This section describes Python type objects and the singleton object None.
PyTypeObject
The C structure of the objects used to describe built-in types.
PyTypeObject PyType_Type
This is the type object for type objects; it is the same object as type in the Python layer.
int PyType_Check(PyObject *o)
Return non-zero if the object o is a type object, including instances of types derived from the standard type object.
Return 0 in all other cases. This function always succeeds.
int PyType_CheckExact(PyObject *o)
Return non-zero if the object o is a type object, but not a subtype of the standard type object. Return 0 in all other
cases. This function always succeeds.
unsigned int PyType_ClearCache()
Clear the internal lookup cache. Return the current version tag.
unsigned long PyType_GetFlags(PyTypeObject* type)
Return the tp_flags member of type. This function is primarily meant for use with Py_LIMITED_API; the
individual flag bits are guaranteed to be stable across Python releases, but access to tp_flags itself is not part
of the limited API.
New in version 3.2.
Changed in version 3.4: The return type is now unsigned long rather than long.
Note: If some of the base classes implement the GC protocol and the provided type does not include
Py_TPFLAGS_HAVE_GC in its flags, then the GC protocol will be automatically implemented from its par-
ents. Conversely, if the type being created does include Py_TPFLAGS_HAVE_GC in its flags then it must
implement the GC protocol itself by at least implementing the tp_traverse handler.
The following functions and structs are used to create heap types.
PyObject* PyType_FromModuleAndSpec(PyObject *module, PyType_Spec *spec, PyObject *bases)
Return value: New reference. Creates and returns a heap type object from the spec (Py_TPFLAGS_HEAPTYPE).
If bases is a tuple, the created heap type contains all types contained in it as base types.
If bases is NULL, the Py_tp_bases slot is used instead. If that also is NULL, the Py_tp_base slot is used instead. If
that also is NULL, the new type derives from object.
The module argument can be used to record the module in which the new class is defined. It must be a mod-
ule object or NULL. If not NULL, the module is associated with the new type and can later be retrieved with
PyType_GetModule(). The associated module is not inherited by subclasses; it must be specified for each
class individually.
This function calls PyType_Ready() on the new type.
New in version 3.9.
PyObject* PyType_FromSpecWithBases(PyType_Spec *spec, PyObject *bases)
Return value: New reference. Equivalent to PyType_FromModuleAndSpec(NULL, spec, bases).
New in version 3.3.
PyObject* PyType_FromSpec(PyType_Spec *spec)
Return value: New reference. Equivalent to PyType_FromSpecWithBases(spec, NULL).
PyType_Spec
Structure defining a type’s behavior.
const char* PyType_Spec.name
Name of the type, used to set PyTypeObject.tp_name.
int PyType_Spec.basicsize
int PyType_Spec.itemsize
Size of the instance in bytes, used to set PyTypeObject.tp_basicsize and PyTypeObject.
tp_itemsize.
int PyType_Spec.flags
Type flags, used to set PyTypeObject.tp_flags.
If the Py_TPFLAGS_HEAPTYPE flag is not set, PyType_FromSpecWithBases() sets it automati-
cally.
PyType_Slot *PyType_Spec.slots
Array of PyType_Slot structures. Terminated by the special slot value {0, NULL}.
PyType_Slot
Structure defining optional functionality of a type, containing a slot ID and a value pointer.
int PyType_Slot.slot
A slot ID.
Slot IDs are named like the field names of the structures PyTypeObject, PyNumberMethods,
PySequenceMethods, PyMappingMethods and PyAsyncMethods with an added Py_
prefix. For example, use:
• Py_tp_dealloc to set PyTypeObject.tp_dealloc
• Py_nb_add to set PyNumberMethods.nb_add
• Py_sq_length to set PySequenceMethods.sq_length
The following fields cannot be set at all using PyType_Spec and PyType_Slot:
• tp_dict
• tp_mro
• tp_cache
• tp_subclasses
• tp_weaklist
• tp_vectorcall
• tp_weaklistoffset (see PyMemberDef)
• tp_dictoffset (see PyMemberDef)
• tp_vectorcall_offset (see PyMemberDef)
The following fields cannot be set using PyType_Spec and PyType_Slot under the limited
API:
• bf_getbuffer
• bf_releasebuffer
Setting Py_tp_bases or Py_tp_base may be problematic on some platforms. To avoid issues,
use the bases argument of PyType_FromSpecWithBases() instead.
Changed in version 3.9: Slots in PyBufferProcs may be set in the unlimited API.
void *PyType_Slot.pfunc
The desired value of the slot. In most cases, this is a pointer to a function.
May not be NULL.
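A minimal sketch of defining a heap type this way (the type name mymodule.MyObject, its value field and the choice of a repr slot are illustrative assumptions, not from this manual):

typedef struct {
    PyObject_HEAD
    int value;                          /* example instance field */
} MyObject;

static PyObject *
MyObject_repr(PyObject *self)
{
    return PyUnicode_FromFormat("<MyObject value=%d>", ((MyObject *)self)->value);
}

static PyType_Slot my_slots[] = {
    {Py_tp_repr, MyObject_repr},        /* Py_tp_repr sets PyTypeObject.tp_repr */
    {0, NULL}                           /* terminating sentinel */
};

static PyType_Spec my_spec = {
    "mymodule.MyObject",                /* name */
    (int)sizeof(MyObject),              /* basicsize */
    0,                                  /* itemsize */
    Py_TPFLAGS_DEFAULT,                 /* flags */
    my_slots                            /* slots */
};

/* typically called from the module initialization function: */
static PyObject *
make_my_type(void)
{
    return PyType_FromSpec(&my_spec);   /* returns a new heap type, or NULL */
}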
Note that the PyTypeObject for None is not directly exposed in the Python/C API. Since None is a singleton, testing
for object identity (using == in C) is sufficient. There is no PyNone_Check() function for the same reason.
PyObject* Py_None
The Python None object, denoting lack of value. This object has no methods. It needs to be treated just like any
other object with respect to reference counts.
Py_RETURN_NONE
Properly handle returning Py_None from within a C function (that is, increment the reference count of None and
return it.)
Deprecated since version 3.3, will be removed in version 3.10: The related PyLong_FromUnicode() function is part
of the old-style Py_UNICODE API; please migrate to using PyLong_FromUnicodeObject().
PyObject* PyLong_FromUnicodeObject(PyObject *u, int base)
Return value: New reference. Convert a sequence of Unicode digits in the string u to a Python integer value.
New in version 3.3.
PyObject* PyLong_FromVoidPtr(void *p)
Return value: New reference. Create a Python integer from the pointer p. The pointer value can be retrieved from
the resulting value using PyLong_AsVoidPtr().
long PyLong_AsLong(PyObject *obj)
Return a C long representation of obj. If obj is not an instance of PyLongObject, first call its __index__()
or __int__() method (if present) to convert it to a PyLongObject.
Raise OverflowError if the value of obj is out of range for a long.
Returns -1 on error. Use PyErr_Occurred() to disambiguate.
Changed in version 3.8: Use __index__() if available.
Deprecated since version 3.8: Using __int__() is deprecated.
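A short sketch of the disambiguation (assumed to run inside a function that receives obj and returns PyObject *):

long value = PyLong_AsLong(obj);
if (value == -1 && PyErr_Occurred()) {
    /* conversion failed; OverflowError, TypeError, ... is already set */
    return NULL;
}
/* value is a genuine result, possibly a legitimate -1 */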
long PyLong_AsLongAndOverflow(PyObject *obj, int *overflow)
Return a C long representation of obj. If obj is not an instance of PyLongObject, first call its __index__()
or __int__() method (if present) to convert it to a PyLongObject.
If the value of obj is greater than LONG_MAX or less than LONG_MIN, set *overflow to 1 or -1, respectively, and
return -1; otherwise, set *overflow to 0. If any other exception occurs set *overflow to 0 and return -1 as usual.
Returns -1 on error. Use PyErr_Occurred() to disambiguate.
Changed in version 3.8: Use __index__() if available.
Deprecated since version 3.8: Using __int__() is deprecated.
long long PyLong_AsLongLong(PyObject *obj)
Return a C long long representation of obj. If obj is not an instance of PyLongObject, first call its
__index__() or __int__() method (if present) to convert it to a PyLongObject.
Raise OverflowError if the value of obj is out of range for a long long.
Returns -1 on error. Use PyErr_Occurred() to disambiguate.
Changed in version 3.8: Use __index__() if available.
Deprecated since version 3.8: Using __int__() is deprecated.
long long PyLong_AsLongLongAndOverflow(PyObject *obj, int *overflow)
Return a C long long representation of obj. If obj is not an instance of PyLongObject, first call its
__index__() or __int__() method (if present) to convert it to a PyLongObject.
If the value of obj is greater than LLONG_MAX or less than LLONG_MIN, set *overflow to 1 or -1, respectively,
and return -1; otherwise, set *overflow to 0. If any other exception occurs set *overflow to 0 and return -1 as
usual.
Returns -1 on error. Use PyErr_Occurred() to disambiguate.
New in version 3.2.
Changed in version 3.8: Use __index__() if available.
Deprecated since version 3.8: Using __int__() is deprecated.
Booleans in Python are implemented as a subclass of integers. There are only two booleans, Py_False and Py_True.
As such, the normal creation and deletion functions don’t apply to booleans. The following macros are available, however.
PyFloatObject
This subtype of PyObject represents a Python floating point object.
PyTypeObject PyFloat_Type
This instance of PyTypeObject represents the Python floating point type. This is the same object as float in
the Python layer.
int PyFloat_Check(PyObject *p)
Return true if its argument is a PyFloatObject or a subtype of PyFloatObject. This function always
succeeds.
int PyFloat_CheckExact(PyObject *p)
Return true if its argument is a PyFloatObject, but not a subtype of PyFloatObject. This function always
succeeds.
PyObject* PyFloat_FromString(PyObject *str)
Return value: New reference. Create a PyFloatObject object based on the string value in str, or NULL on
failure.
PyObject* PyFloat_FromDouble(double v)
Return value: New reference. Create a PyFloatObject object from v, or NULL on failure.
double PyFloat_AsDouble(PyObject *pyfloat)
Return a C double representation of the contents of pyfloat. If pyfloat is not a Python floating point object but
has a __float__() method, this method will first be called to convert pyfloat into a float. If __float__()
is not defined then it falls back to __index__(). This method returns -1.0 upon failure, so one should call
PyErr_Occurred() to check for errors.
Python’s complex number objects are implemented as two distinct types when viewed from the C API: one is the Python
object exposed to Python programs, and the other is a C structure which represents the actual complex number value.
The API provides functions for working with both.
Note that the functions which accept these structures as parameters and return them as results do so by value rather than
dereferencing them through pointers. This is consistent throughout the API.
Py_complex
The C structure which corresponds to the value portion of a Python complex number object. Most of the functions
for dealing with complex number objects use structures of this type as input or output values, as appropriate. It is
defined as:
typedef struct {
double real;
double imag;
} Py_complex;
PyComplexObject
This subtype of PyObject represents a Python complex number object.
PyTypeObject PyComplex_Type
This instance of PyTypeObject represents the Python complex number type. It is the same object as complex
in the Python layer.
int PyComplex_Check(PyObject *p)
Return true if its argument is a PyComplexObject or a subtype of PyComplexObject. This function always
succeeds.
int PyComplex_CheckExact(PyObject *p)
Return true if its argument is a PyComplexObject, but not a subtype of PyComplexObject. This function
always succeeds.
PyObject* PyComplex_FromCComplex(Py_complex v)
Return value: New reference. Create a new Python complex number object from a C Py_complex value.
PyObject* PyComplex_FromDoubles(double real, double imag)
Return value: New reference. Return a new PyComplexObject object from real and imag.
double PyComplex_RealAsDouble(PyObject *op)
Return the real part of op as a C double.
double PyComplex_ImagAsDouble(PyObject *op)
Return the imaginary part of op as a C double.
Py_complex PyComplex_AsCComplex(PyObject *op)
Return the Py_complex value of the complex number op.
If op is not a Python complex number object but has a __complex__() method, this method will first be
called to convert op to a Python complex number object. If __complex__() is not defined then it falls back
to __float__(). If __float__() is not defined then it falls back to __index__(). Upon failure, this
method returns -1.0 as a real value.
Changed in version 3.8: Use __index__() if available.
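A short sketch of round-tripping a C value through a Python complex object (error handling kept minimal):

Py_complex c = {1.5, -2.0};
PyObject *z = PyComplex_FromCComplex(c);    /* new reference, or NULL on failure */
if (z != NULL) {
    double re = PyComplex_RealAsDouble(z);  /* 1.5 */
    double im = PyComplex_ImagAsDouble(z);  /* -2.0 */
    (void)re; (void)im;                     /* use the components here */
    Py_DECREF(z);
}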
Generic operations on sequence objects were discussed in the previous chapter; this section deals with the specific kinds
of sequence objects that are intrinsic to the Python language.
These functions raise TypeError when expecting a bytes parameter and are called with a non-bytes parameter.
PyBytesObject
This subtype of PyObject represents a Python bytes object.
PyTypeObject PyBytes_Type
This instance of PyTypeObject represents the Python bytes type; it is the same object as bytes in the Python
layer.
int PyBytes_Check(PyObject *o)
Return true if the object o is a bytes object or an instance of a subtype of the bytes type. This function always
succeeds.
An unrecognized format character causes all the rest of the format string to be copied as-is to the result object, and
any extra arguments discarded.
PyObject* PyBytes_FromFormatV(const char *format, va_list vargs)
Return value: New reference. Identical to PyBytes_FromFormat() except that it takes exactly two arguments.
PyObject* PyBytes_FromObject(PyObject *o)
Return value: New reference. Return the bytes representation of object o that implements the buffer protocol.
Py_ssize_t PyBytes_Size(PyObject *o)
Return the length of the bytes in bytes object o.
Py_ssize_t PyBytes_GET_SIZE(PyObject *o)
Macro form of PyBytes_Size() but without error checking.
char* PyBytes_AsString(PyObject *o)
Return a pointer to the contents of o. The pointer refers to the internal buffer of o, which consists of len(o) + 1
bytes. The last byte in the buffer is always null, regardless of whether there are any other null bytes. The data must
not be modified in any way, unless the object was just created using PyBytes_FromStringAndSize(NULL,
size). It must not be deallocated. If o is not a bytes object at all, PyBytes_AsString() returns NULL and
raises TypeError.
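A short sketch of reading that buffer (PyBytes_FromString is used here only to obtain an example object):

PyObject *b = PyBytes_FromString("abc");
if (b != NULL) {
    char *data = PyBytes_AsString(b);       /* points at "abc\0"; do not modify */
    Py_ssize_t n = PyBytes_Size(b);         /* 3; data[n] is the terminating null byte */
    (void)data; (void)n;                    /* read data[0] .. data[n-1] here */
    Py_DECREF(b);
}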
1 For integer specifiers (d, u, ld, lu, zd, zu, i, x): the 0-conversion flag has effect even when a precision is given.
PyByteArrayObject
This subtype of PyObject represents a Python bytearray object.
PyTypeObject PyByteArray_Type
This instance of PyTypeObject represents the Python bytearray type; it is the same object as bytearray in
the Python layer.
Macros
These macros trade safety for speed and they don’t check pointers.
char* PyByteArray_AS_STRING(PyObject *bytearray)
Macro version of PyByteArray_AsString().
Py_ssize_t PyByteArray_GET_SIZE(PyObject *bytearray)
Macro version of PyByteArray_Size().
Unicode Objects
Since the implementation of PEP 393 in Python 3.3, Unicode objects internally use a variety of representations, in order
to allow handling the complete range of Unicode characters while staying memory efficient. There are special cases for
strings where all code points are below 128, 256, or 65536; otherwise, code points must be below 1114112 (which is the
full Unicode range).
Py_UNICODE* and UTF-8 representations are created on demand and cached in the Unicode object. The
Py_UNICODE* representation is deprecated and inefficient.
Due to the transition between the old APIs and the new APIs, Unicode objects can internally be in two states depending
on how they were created:
• “canonical” Unicode objects are all objects created by a non-deprecated Unicode API. They use the most efficient
representation allowed by the implementation.
• “legacy” Unicode objects have been created through one of the deprecated APIs (typically
PyUnicode_FromUnicode()) and only bear the Py_UNICODE* representation; you will have to call
PyUnicode_READY() on them before calling any other API.
Note: The “legacy” Unicode object will be removed in Python 3.12 with deprecated APIs. All Unicode objects will be
“canonical” since then. See PEP 623 for more information.
Unicode Type
These are the basic Unicode object types used for the Unicode implementation in Python:
Py_UCS4
Py_UCS2
Py_UCS1
These types are typedefs for unsigned integer types wide enough to contain characters of 32 bits, 16 bits and 8 bits,
respectively. When dealing with single Unicode characters, use Py_UCS4.
New in version 3.3.
Py_UNICODE
This is a typedef of wchar_t, which is a 16-bit type or 32-bit type depending on the platform.
Changed in version 3.3: In previous versions, this was a 16-bit type or a 32-bit type depending on whether you
selected a “narrow” or “wide” Unicode version of Python at build time.
PyASCIIObject
PyCompactUnicodeObject
PyUnicodeObject
These subtypes of PyObject represent a Python Unicode object. In almost all cases, they shouldn’t be used
directly, since all API functions that deal with Unicode objects take and return PyObject pointers.
New in version 3.3.
PyTypeObject PyUnicode_Type
This instance of PyTypeObject represents the Python Unicode type. It is exposed to Python code as str.
The following APIs are really C macros and can be used to do fast checks and to access internal read-only data of Unicode
objects:
int PyUnicode_Check(PyObject *o)
Return true if the object o is a Unicode object or an instance of a Unicode subtype. This function always succeeds.
int PyUnicode_CheckExact(PyObject *o)
Return true if the object o is a Unicode object, but not an instance of a subtype. This function always succeeds.
int PyUnicode_READY(PyObject *o)
Ensure the string object o is in the “canonical” representation. This is required before using any of the access
macros described below.
Returns 0 on success and -1 with an exception set on failure, which in particular happens if memory allocation
fails.
New in version 3.3.
Deprecated since version 3.10, will be removed in version 3.12: This API will be removed with
PyUnicode_FromUnicode().
Py_ssize_t PyUnicode_GET_LENGTH(PyObject *o)
Return the length of the Unicode string, in code points. o has to be a Unicode object in the “canonical” represen-
tation (not checked).
New in version 3.3.
Py_UCS1* PyUnicode_1BYTE_DATA(PyObject *o)
Py_UCS2* PyUnicode_2BYTE_DATA(PyObject *o)
Py_UCS4* PyUnicode_4BYTE_DATA(PyObject *o)
Return a pointer to the canonical representation cast to UCS1, UCS2 or UCS4 integer types for direct char-
acter access. No checks are performed if the canonical representation has the correct character size; use
PyUnicode_KIND() to select the right macro. Make sure PyUnicode_READY() has been called before
accessing this.
New in version 3.3.
PyUnicode_WCHAR_KIND
PyUnicode_1BYTE_KIND
PyUnicode_2BYTE_KIND
PyUnicode_4BYTE_KIND
Return values of the PyUnicode_KIND() macro.
New in version 3.3.
Deprecated since version 3.10, will be removed in version 3.12: PyUnicode_WCHAR_KIND is deprecated.
unsigned int PyUnicode_KIND(PyObject *o)
Return one of the PyUnicode kind constants (see above) that indicate how many bytes per character this Unicode
object uses to store its data. o has to be a Unicode object in the “canonical” representation (not checked).
New in version 3.3.
void* PyUnicode_DATA(PyObject *o)
Return a void pointer to the raw Unicode buffer. o has to be a Unicode object in the “canonical” representation
(not checked).
New in version 3.3.
void PyUnicode_WRITE(int kind, void *data, Py_ssize_t index, Py_UCS4 value)
Write into a canonical representation data (as obtained with PyUnicode_DATA()). This macro does not do
any sanity checks and is intended for usage in loops. The caller should cache the kind value and data pointer as
obtained from other macro calls. index is the index in the string (starts at 0) and value is the new code point value
which should be written to that location.
New in version 3.3.
Py_UCS4 PyUnicode_READ(int kind, void *data, Py_ssize_t index)
Read a code point from a canonical representation data (as obtained with PyUnicode_DATA()). No checks or
ready calls are performed.
New in version 3.3.
Py_UCS4 PyUnicode_READ_CHAR(PyObject *o, Py_ssize_t index)
Read a character from a Unicode object o, which must be in the “canonical” representation. This is less efficient
than PyUnicode_READ() if you do multiple consecutive reads.
New in version 3.3.
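A sketch of a consumer that walks the canonical representation with these macros (the helper name count_char is illustrative):

static Py_ssize_t
count_char(PyObject *str, Py_UCS4 target)
{
    if (PyUnicode_READY(str) < 0)           /* ensure the canonical representation */
        return -1;
    int kind = PyUnicode_KIND(str);
    void *data = PyUnicode_DATA(str);
    Py_ssize_t len = PyUnicode_GET_LENGTH(str);
    Py_ssize_t count = 0;
    for (Py_ssize_t i = 0; i < len; i++) {
        if (PyUnicode_READ(kind, data, i) == target)
            count++;
    }
    return count;
}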
PyUnicode_MAX_CHAR_VALUE(o)
Return the maximum code point that is suitable for creating another string based on o, which must be in the
“canonical” representation. This is always an approximation but more efficient than iterating over the string.
New in version 3.3.
Py_ssize_t PyUnicode_GET_SIZE(PyObject *o)
Return the size of the deprecated Py_UNICODE representation, in code units (this includes surrogate pairs as 2
units). o has to be a Unicode object (not checked).
Deprecated since version 3.3, will be removed in version 3.12: Part of the old-style Unicode API, please migrate
to using PyUnicode_GET_LENGTH().
Py_ssize_t PyUnicode_GET_DATA_SIZE(PyObject *o)
Return the size of the deprecated Py_UNICODE representation in bytes. o has to be a Unicode object (not checked).
Deprecated since version 3.3, will be removed in version 3.12: Part of the old-style Unicode API, please migrate
to using PyUnicode_GET_LENGTH().
Py_UNICODE* PyUnicode_AS_UNICODE(PyObject *o)
const char* PyUnicode_AS_DATA(PyObject *o)
Return a pointer to a Py_UNICODE representation of the object. The returned buffer is always terminated with an
extra null code point. It may also contain embedded null code points, which would cause the string to be truncated
when used in most C functions. The AS_DATA form casts the pointer to const char *. The o argument has
to be a Unicode object (not checked).
Changed in version 3.3: This macro is now inefficient – because in many cases the Py_UNICODE representation
does not exist and needs to be created – and can fail (return NULL with an exception set). Try to port the code to use
the new PyUnicode_nBYTE_DATA() macros or use PyUnicode_WRITE() or PyUnicode_READ().
Deprecated since version 3.3, will be removed in version 3.12: Part of the old-style Unicode API, please migrate
to using the PyUnicode_nBYTE_DATA() family of macros.
int PyUnicode_IsIdentifier(PyObject *o)
Return 1 if the string is a valid identifier according to the language definition, section identifiers. Return 0 otherwise.
Changed in version 3.9: The function does not call Py_FatalError() anymore if the string is not ready.
Unicode provides many different character properties. The most often needed ones are available through these macros
which are mapped to C functions depending on the Python configuration.
int Py_UNICODE_ISSPACE(Py_UNICODE ch)
Return 1 or 0 depending on whether ch is a whitespace character.
int Py_UNICODE_ISLOWER(Py_UNICODE ch)
Return 1 or 0 depending on whether ch is a lowercase character.
int Py_UNICODE_ISUPPER(Py_UNICODE ch)
Return 1 or 0 depending on whether ch is an uppercase character.
int Py_UNICODE_ISTITLE(Py_UNICODE ch)
Return 1 or 0 depending on whether ch is a titlecase character.
int Py_UNICODE_ISLINEBREAK(Py_UNICODE ch)
Return 1 or 0 depending on whether ch is a linebreak character.
int Py_UNICODE_ISDECIMAL(Py_UNICODE ch)
Return 1 or 0 depending on whether ch is a decimal character.
int Py_UNICODE_ISDIGIT(Py_UNICODE ch)
Return 1 or 0 depending on whether ch is a digit character.
int Py_UNICODE_ISNUMERIC(Py_UNICODE ch)
Return 1 or 0 depending on whether ch is a numeric character.
int Py_UNICODE_ISALPHA(Py_UNICODE ch)
Return 1 or 0 depending on whether ch is an alphabetic character.
int Py_UNICODE_ISALNUM(Py_UNICODE ch)
Return 1 or 0 depending on whether ch is an alphanumeric character.
int Py_UNICODE_ISPRINTABLE(Py_UNICODE ch)
Return 1 or 0 depending on whether ch is a printable character. Nonprintable characters are those characters
defined in the Unicode character database as “Other” or “Separator”, excepting the ASCII space (0x20) which is
considered printable. (Note that printable characters in this context are those which should not be escaped when
repr() is invoked on a string. It has no bearing on the handling of strings written to sys.stdout or sys.
stderr.)
These APIs can be used for fast direct character conversions:
Py_UNICODE Py_UNICODE_TOLOWER(Py_UNICODE ch)
Return the character ch converted to lower case.
Deprecated since version 3.3: This function uses simple case mappings.
Py_UNICODE Py_UNICODE_TOUPPER(Py_UNICODE ch)
Return the character ch converted to upper case.
Deprecated since version 3.3: This function uses simple case mappings.
Py_UNICODE Py_UNICODE_TOTITLE(Py_UNICODE ch)
Return the character ch converted to title case.
Deprecated since version 3.3: This function uses simple case mappings.
int Py_UNICODE_TODECIMAL(Py_UNICODE ch)
Return the character ch converted to a decimal positive integer. Return -1 if this is not possible. This macro does
not raise exceptions.
int Py_UNICODE_TODIGIT(Py_UNICODE ch)
Return the character ch converted to a single digit integer. Return -1 if this is not possible. This macro does not
raise exceptions.
double Py_UNICODE_TONUMERIC(Py_UNICODE ch)
Return the character ch converted to a double. Return -1.0 if this is not possible. This macro does not raise
exceptions.
These APIs can be used to work with surrogates:
Py_UNICODE_IS_SURROGATE(ch)
Check if ch is a surrogate (0xD800 <= ch <= 0xDFFF).
Py_UNICODE_IS_HIGH_SURROGATE(ch)
Check if ch is a high surrogate (0xD800 <= ch <= 0xDBFF).
Py_UNICODE_IS_LOW_SURROGATE(ch)
Check if ch is a low surrogate (0xDC00 <= ch <= 0xDFFF).
Py_UNICODE_JOIN_SURROGATES(high, low)
Join two surrogate characters and return a single Py_UCS4 value. high and low are respectively the leading and
trailing surrogates in a surrogate pair.
To create Unicode objects and access their basic sequence properties, use these APIs:
PyObject* PyUnicode_New(Py_ssize_t size, Py_UCS4 maxchar)
Return value: New reference. Create a new Unicode object. maxchar should be the true maximum code point to
be placed in the string. As an approximation, it can be rounded up to the nearest value in the sequence 127, 255,
65535, 1114111.
This is the recommended way to allocate a new Unicode object. Objects created using this function are not resizable.
New in version 3.3.
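A minimal sketch of filling a freshly allocated string in place with PyUnicode_WRITE():

PyObject *s = PyUnicode_New(3, 127);        /* three code points, all <= 127 */
if (s != NULL) {
    int kind = PyUnicode_KIND(s);
    void *data = PyUnicode_DATA(s);
    PyUnicode_WRITE(kind, data, 0, 'a');
    PyUnicode_WRITE(kind, data, 1, 'b');
    PyUnicode_WRITE(kind, data, 2, 'c');    /* s now holds "abc" */
}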
PyObject* PyUnicode_FromKindAndData(int kind, const void *buffer, Py_ssize_t size)
Return value: New reference. Create a new Unicode object with the given kind (possible values are
An unrecognized format character causes all the rest of the format string to be copied as-is to the result string, and
any extra arguments discarded.
1 For integer specifiers (d, u, ld, li, lu, lld, lli, llu, zd, zi, zu, i, x): the 0-conversion flag has effect even when a precision is given.
Note: The width formatter unit is number of characters rather than bytes. The precision formatter unit is number
of bytes for "%s" and "%V" (if the PyObject* argument is NULL), and a number of characters for "%A",
"%U", "%S", "%R" and "%V" (if the PyObject* argument is not NULL).
Locale Encoding
The current locale encoding can be used to decode text from the operating system.
PyObject* PyUnicode_DecodeLocaleAndSize(const char *str, Py_ssize_t len, const char *errors)
Return value: New reference. Decode a string from UTF-8 on Android and VxWorks, or from the current locale
encoding on other platforms. The supported error handlers are "strict" and "surrogateescape" (PEP
383). The decoder uses "strict" error handler if errors is NULL. str must end with a null character but cannot
contain embedded null characters.
Use PyUnicode_DecodeFSDefaultAndSize() to decode a string from
Py_FileSystemDefaultEncoding (the locale encoding read at Python startup).
This function ignores the Python UTF-8 mode.
See also:
The Py_DecodeLocale() function.
New in version 3.3.
Changed in version 3.7: The function now also uses the current locale encoding for the surrogateescape
error handler, except on Android. Previously, Py_DecodeLocale() was used for the surrogateescape,
and the current locale encoding was used for strict.
To encode and decode file names and other environment strings, Py_FileSystemDefaultEncoding should be
used as the encoding, and Py_FileSystemDefaultEncodeErrors should be used as the error handler (PEP
383 and PEP 529). To encode file names to bytes during argument parsing, the "O&" converter should be used,
passing PyUnicode_FSConverter() as the conversion function:
int PyUnicode_FSConverter(PyObject* obj, void* result)
ParseTuple converter: encode str objects – obtained directly or through the os.PathLike interface –
to bytes using PyUnicode_EncodeFSDefault(); bytes objects are output as-is. result must be a
PyBytesObject* which must be released when it is no longer used.
New in version 3.1.
Changed in version 3.6: Accepts a path-like object.
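A sketch of the converter in use (the function my_open and what it does with the path are illustrative assumptions):

static PyObject *
my_open(PyObject *self, PyObject *args)
{
    PyObject *path_bytes = NULL;
    if (!PyArg_ParseTuple(args, "O&", PyUnicode_FSConverter, &path_bytes))
        return NULL;
    const char *path = PyBytes_AsString(path_bytes);
    (void)path;                             /* open or inspect path here */
    Py_DECREF(path_bytes);                  /* release the result when no longer used */
    Py_RETURN_NONE;
}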
To decode file names to str during argument parsing, the "O&" converter should be used, passing
PyUnicode_FSDecoder() as the conversion function:
int PyUnicode_FSDecoder(PyObject* obj, void* result)
ParseTuple converter: decode bytes objects – obtained either directly or indirectly through the os.PathLike
interface – to str using PyUnicode_DecodeFSDefaultAndSize(); str objects are output as-is. result
must be a PyUnicodeObject* which must be released when it is no longer used.
New in version 3.2.
Changed in version 3.6: Accepts a path-like object.
PyObject* PyUnicode_DecodeFSDefaultAndSize(const char *s, Py_ssize_t size)
Return value: New reference. Decode a string using Py_FileSystemDefaultEncoding and the
Py_FileSystemDefaultEncodeErrors error handler.
If Py_FileSystemDefaultEncoding is not set, fall back to the locale encoding.
wchar_t Support
Returns a buffer allocated by PyMem_Alloc() (use PyMem_Free() to free it) on success. On error, returns
NULL and *size is undefined. Raises a MemoryError if memory allocation fails.
New in version 3.2.
Changed in version 3.7: Raises a ValueError if size is NULL and the wchar_t* string contains null characters.
Built-in Codecs
Python provides a set of built-in codecs which are written in C for speed. All of these codecs are directly usable via the
following functions.
Many of the following APIs take two arguments encoding and errors, and they have the same semantics as the ones of
the built-in str() string object constructor.
Setting encoding to NULL causes the default encoding to be used which is UTF-8. The file sys-
tem calls should use PyUnicode_FSConverter() for encoding file names. This uses the variable
Py_FileSystemDefaultEncoding internally. This variable should be treated as read-only: on some systems,
it will be a pointer to a static string, on others, it will change at run-time (such as when the application invokes setlocale).
Error handling is set by errors which may also be set to NULL meaning to use the default handling defined for the codec.
Default error handling for all built-in codecs is “strict” (ValueError is raised).
The codecs all use a similar interface. Only deviations from the following generic ones are documented for simplicity.
Generic Codecs
UTF-8 Codecs
UTF-32 Codecs
If *byteorder is zero, and the first four bytes of the input data are a byte order mark (BOM), the decoder
switches to this byte order and the BOM is not copied into the resulting Unicode string. If *byteorder is -1 or
1, any byte order mark is copied to the output.
After completion, *byteorder is set to the current byte order at the end of input data.
If byteorder is NULL, the codec starts in native order mode.
Return NULL if an exception was raised by the codec.
PyObject* PyUnicode_DecodeUTF32Stateful(const char *s, Py_ssize_t size, const char *errors, int *by-
teorder, Py_ssize_t *consumed)
Return value: New reference. If consumed is NULL, behave like PyUnicode_DecodeUTF32(). If consumed
is not NULL, PyUnicode_DecodeUTF32Stateful() will not treat trailing incomplete UTF-32 byte se-
quences (such as a number of bytes not divisible by four) as an error. Those bytes will not be decoded and the
number of bytes that have been decoded will be stored in consumed.
PyObject* PyUnicode_AsUTF32String(PyObject *unicode)
Return value: New reference. Return a Python byte string using the UTF-32 encoding in native byte order. The
string always starts with a BOM mark. Error handling is “strict”. Return NULL if an exception was raised by the
codec.
PyObject* PyUnicode_EncodeUTF32(const Py_UNICODE *s, Py_ssize_t size, const char *errors, int byte-
order)
Return value: New reference. Return a Python bytes object holding the UTF-32 encoded value of the Unicode data
in s. Output is written according to the following byte order:
If byteorder is 0, the output string will always start with the Unicode BOM mark (U+FEFF). In the other two
modes, no BOM mark is prepended.
If Py_UNICODE_WIDE is not defined, surrogate pairs will be output as a single code point.
Return NULL if an exception was raised by the codec.
Deprecated since version 3.3, will be removed in version 3.11: Part of the old-style Py_UNICODE API; please
migrate to using PyUnicode_AsUTF32String() or PyUnicode_AsEncodedString().
UTF-16 Codecs
If *byteorder is zero, and the first two bytes of the input data are a byte order mark (BOM), the decoder
switches to this byte order and the BOM is not copied into the resulting Unicode string. If *byteorder is -1 or
1, any byte order mark is copied to the output (where it will result in either a \ufeff or a \ufffe character).
After completion, *byteorder is set to the current byte order at the end of input data.
If byteorder is 0, the output string will always start with the Unicode BOM mark (U+FEFF). In the other two
modes, no BOM mark is prepended.
If Py_UNICODE_WIDE is defined, a single Py_UNICODE value may get represented as a surrogate pair. If it is
not defined, each Py_UNICODE value is interpreted as a UCS-2 character.
Return NULL if an exception was raised by the codec.
Deprecated since version 3.3, will be removed in version 3.11: Part of the old-style Py_UNICODE API; please
migrate to using PyUnicode_AsUTF16String() or PyUnicode_AsEncodedString().
UTF-7 Codecs
Deprecated since version 3.3, will be removed in version 3.11: Part of the old-style Py_UNICODE API; please
migrate to using PyUnicode_AsEncodedString().
Unicode-Escape Codecs
Raw-Unicode-Escape Codecs
Latin-1 Codecs
These are the Latin-1 codec APIs: Latin-1 corresponds to the first 256 Unicode ordinals and only these are accepted by
the codecs during encoding.
PyObject* PyUnicode_DecodeLatin1(const char *s, Py_ssize_t size, const char *errors)
Return value: New reference. Create a Unicode object by decoding size bytes of the Latin-1 encoded string s. Return
NULL if an exception was raised by the codec.
PyObject* PyUnicode_AsLatin1String(PyObject *unicode)
Return value: New reference. Encode a Unicode object using Latin-1 and return the result as Python bytes object.
Error handling is “strict”. Return NULL if an exception was raised by the codec.
ASCII Codecs
These are the ASCII codec APIs. Only 7-bit ASCII data is accepted. All other codes generate errors.
PyObject* PyUnicode_DecodeASCII(const char *s, Py_ssize_t size, const char *errors)
Return value: New reference. Create a Unicode object by decoding size bytes of the ASCII encoded string s. Return
NULL if an exception was raised by the codec.
PyObject* PyUnicode_AsASCIIString(PyObject *unicode)
Return value: New reference. Encode a Unicode object using ASCII and return the result as Python bytes object.
Error handling is “strict”. Return NULL if an exception was raised by the codec.
PyObject* PyUnicode_EncodeASCII(const Py_UNICODE *s, Py_ssize_t size, const char *errors)
Return value: New reference. Encode the Py_UNICODE buffer of the given size using ASCII and return a Python
bytes object. Return NULL if an exception was raised by the codec.
Deprecated since version 3.3, will be removed in version 3.11: Part of the old-style Py_UNICODE API; please
migrate to using PyUnicode_AsASCIIString() or PyUnicode_AsEncodedString().
This codec is special in that it can be used to implement many different codecs (and this is in fact what was done to
obtain most of the standard codecs included in the encodings package). The codec uses mapping to encode and
decode characters. The mapping objects provided must support the __getitem__() mapping interface; dictionaries
and sequences work well.
These are the mapping codec APIs:
PyObject* PyUnicode_DecodeCharmap(const char *data, Py_ssize_t size, PyObject *mapping, const
char *errors)
Return value: New reference. Create a Unicode object by decoding size bytes of the encoded string s using the given
mapping object. Return NULL if an exception was raised by the codec.
If mapping is NULL, Latin-1 decoding will be applied. Else mapping must map bytes ordinals (integers in the range
from 0 to 255) to Unicode strings, integers (which are then interpreted as Unicode ordinals) or None. Unmapped
data bytes – ones which cause a LookupError, as well as ones which get mapped to None, 0xFFFE or '\
ufffe', are treated as undefined mappings and cause an error.
PyObject* PyUnicode_AsCharmapString(PyObject *unicode, PyObject *mapping)
Return value: New reference. Encode a Unicode object using the given mapping object and return the result as a
bytes object. Error handling is “strict”. Return NULL if an exception was raised by the codec.
The mapping object must map Unicode ordinal integers to bytes objects, integers in the range from 0 to 255 or
None. Unmapped character ordinals (ones which cause a LookupError), as well as ones which are mapped to
None, are treated as “undefined mapping” and cause an error.
PyObject* PyUnicode_EncodeCharmap(const Py_UNICODE *s, Py_ssize_t size, PyObject *mapping, const
char *errors)
Return value: New reference. Encode the Py_UNICODE buffer of the given size using the given mapping object
and return the result as a bytes object. Return NULL if an exception was raised by the codec.
Deprecated since version 3.3, will be removed in version 3.11: Part of the old-style Py_UNICODE API; please
migrate to using PyUnicode_AsCharmapString() or PyUnicode_AsEncodedString().
The following codec API is special in that it maps Unicode to Unicode.
PyObject* PyUnicode_Translate(PyObject *str, PyObject *table, const char *errors)
Return value: New reference. Translate a string by applying a character mapping table to it and return the resulting
Unicode object. Return NULL if an exception was raised by the codec.
The mapping table must map Unicode ordinal integers to Unicode ordinal integers or None (causing deletion of
the character).
Mapping tables need only provide the __getitem__() interface; dictionaries and sequences work well. Un-
mapped character ordinals (ones which cause a LookupError) are left untouched and are copied as-is.
errors has the usual meaning for codecs. It may be NULL which indicates to use the default error handling.
PyObject* PyUnicode_TranslateCharmap(const Py_UNICODE *s, Py_ssize_t size, PyObject *mapping,
const char *errors)
Return value: New reference. Translate a Py_UNICODE buffer of the given size by applying a character mapping
table to it and return the resulting Unicode object. Return NULL when an exception was raised by the codec.
Deprecated since version 3.3, will be removed in version 3.11: Part of the old-style Py_UNICODE API; please
migrate to using PyUnicode_Translate() or the generic codec-based API.
These are the MBCS codec APIs. They are currently only available on Windows and use the Win32 MBCS converters
to implement the conversions. Note that MBCS (or DBCS) is a class of encodings, not just one. The target encoding is
defined by the user settings on the machine running the codec.
PyObject* PyUnicode_DecodeMBCS(const char *s, Py_ssize_t size, const char *errors)
Return value: New reference. Create a Unicode object by decoding size bytes of the MBCS encoded string s. Return
NULL if an exception was raised by the codec.
PyObject* PyUnicode_DecodeMBCSStateful(const char *s, Py_ssize_t size, const char *errors,
Py_ssize_t *consumed)
Return value: New reference. If consumed is NULL, behave like PyUnicode_DecodeMBCS(). If consumed
is not NULL, PyUnicode_DecodeMBCSStateful() will not decode trailing lead byte and the number of
bytes that have been decoded will be stored in consumed.
PyObject* PyUnicode_AsMBCSString(PyObject *unicode)
Return value: New reference. Encode a Unicode object using MBCS and return the result as Python bytes object.
Error handling is “strict”. Return NULL if an exception was raised by the codec.
PyObject* PyUnicode_EncodeCodePage(int code_page, PyObject *unicode, const char *errors)
Return value: New reference. Encode the Unicode object using the specified code page and return a Python bytes
object. Return NULL if an exception was raised by the codec. Use CP_ACP code page to get the MBCS encoder.
New in version 3.3.
PyObject* PyUnicode_EncodeMBCS(const Py_UNICODE *s, Py_ssize_t size, const char *errors)
Return value: New reference. Encode the Py_UNICODE buffer of the given size using MBCS and return a Python
bytes object. Return NULL if an exception was raised by the codec.
Deprecated since version 3.3, will be removed in version 4.0: Part of the old-style Py_UNICODE
API; please migrate to using PyUnicode_AsMBCSString(), PyUnicode_EncodeCodePage() or
PyUnicode_AsEncodedString().
The following APIs are capable of handling Unicode objects and strings on input (we refer to them as strings in the
descriptions) and return Unicode objects or integers as appropriate.
They all return NULL or -1 if an exception occurs.
PyObject* PyUnicode_Concat(PyObject *left, PyObject *right)
Return value: New reference. Concatenate two strings, giving a new Unicode string.
PyObject* PyUnicode_Split(PyObject *s, PyObject *sep, Py_ssize_t maxsplit)
Return value: New reference. Split a string giving a list of Unicode strings. If sep is NULL, splitting will be done
at all whitespace substrings. Otherwise, splits occur at the given separator. At most maxsplit splits will be done. If
negative, no limit is set. Separators are not included in the resulting list.
PyObject* PyUnicode_Splitlines(PyObject *s, int keepend)
Return value: New reference. Split a Unicode string at line breaks, returning a list of Unicode strings. CRLF is
considered to be one line break. If keepend is 0, the line break characters are not included in the resulting strings.
PyObject* PyUnicode_Join(PyObject *separator, PyObject *seq)
Return value: New reference. Join a sequence of strings using the given separator and return the resulting Unicode
string.
Py_ssize_t PyUnicode_Tailmatch(PyObject *str, PyObject *substr, Py_ssize_t start, Py_ssize_t end, int di-
rection)
Return 1 if substr matches str[start:end] at the given tail end (direction == -1 means to do a prefix match,
direction == 1 a suffix match), 0 otherwise. Return -1 if an error occurred.
Py_ssize_t PyUnicode_Find(PyObject *str, PyObject *substr, Py_ssize_t start, Py_ssize_t end, int direction)
Return the first position of substr in str[start:end] using the given direction (direction == 1 means to do a
forward search, direction == -1 a backward search). The return value is the index of the first match; a value of -1
indicates that no match was found, and -2 indicates that an error occurred and an exception has been set.
Py_ssize_t PyUnicode_FindChar(PyObject *str, Py_UCS4 ch, Py_ssize_t start, Py_ssize_t end, int direction)
Return the first position of the character ch in str[start:end] using the given direction (direction == 1 means
to do a forward search, direction == -1 a backward search). The return value is the index of the first match; a value
of -1 indicates that no match was found, and -2 indicates that an error occurred and an exception has been set.
New in version 3.3.
Changed in version 3.7: start and end are now adjusted to behave like str[start:end].
Py_ssize_t PyUnicode_Count(PyObject *str, PyObject *substr, Py_ssize_t start, Py_ssize_t end)
Return the number of non-overlapping occurrences of substr in str[start:end]. Return -1 if an error oc-
curred.
PyObject* PyUnicode_Replace(PyObject *str, PyObject *substr, PyObject *replstr, Py_ssize_t maxcount)
Return value: New reference. Replace at most maxcount occurrences of substr in str with replstr and return the
resulting Unicode object. maxcount == -1 means replace all occurrences.
int PyUnicode_Compare(PyObject *left, PyObject *right)
Compare two strings and return -1, 0, 1 for less than, equal, and greater than, respectively.
This function returns -1 upon failure, so one should call PyErr_Occurred() to check for errors.
int PyUnicode_CompareWithASCIIString(PyObject *uni, const char *string)
Compare a Unicode object, uni, with string and return -1, 0, 1 for less than, equal, and greater than, respectively.
It is best to pass only ASCII-encoded strings, but the function interprets the input string as ISO-8859-1 if it contains
non-ASCII characters.
PyTupleObject
This subtype of PyObject represents a Python tuple object.
PyTypeObject PyTuple_Type
This instance of PyTypeObject represents the Python tuple type; it is the same object as tuple in the Python
layer.
int PyTuple_Check(PyObject *p)
Return true if p is a tuple object or an instance of a subtype of the tuple type. This function always succeeds.
int PyTuple_CheckExact(PyObject *p)
Return true if p is a tuple object, but not an instance of a subtype of the tuple type. This function always succeeds.
PyObject* PyTuple_New(Py_ssize_t len)
Return value: New reference. Return a new tuple object of size len, or NULL on failure.
PyObject* PyTuple_Pack(Py_ssize_t n, ...)
Return value: New reference. Return a new tuple object of size n, or NULL on failure. The tuple values are initialized
to the subsequent n C arguments pointing to Python objects. PyTuple_Pack(2, a, b) is equivalent to
Py_BuildValue("(OO)", a, b).
Py_ssize_t PyTuple_Size(PyObject *p)
Take a pointer to a tuple object, and return the size of that tuple.
Note: This function “steals” a reference to o and discards a reference to an item already in the tuple at the affected
position.
Note: This macro “steals” a reference to o, and, unlike PyTuple_SetItem(), does not discard a reference to
any item that is being replaced; any reference in the tuple at position pos will be leaked.
Struct sequence objects are the C equivalent of namedtuple() objects, i.e. a sequence whose items can also be
accessed through attributes. To create a struct sequence, you first have to create a specific struct sequence type.
PyTypeObject* PyStructSequence_NewType(PyStructSequence_Desc *desc)
Return value: New reference. Create a new struct sequence type from the data in desc, described below. Instances
of the resulting type can be created with PyStructSequence_New().
void PyStructSequence_InitType(PyTypeObject *type, PyStructSequence_Desc *desc)
Initializes a struct sequence type type from desc in place.
int PyStructSequence_InitType2(PyTypeObject *type, PyStructSequence_Desc *desc)
The same as PyStructSequence_InitType, but returns 0 on success and -1 on failure.
New in version 3.4.
PyStructSequence_Desc
Contains the meta information of a struct sequence type to create.
PyStructSequence_Field
Describes a field of a struct sequence. As a struct sequence is modeled as a tuple, all fields are typed as
PyObject*. The index in the fields array of the PyStructSequence_Desc determines which field
of the struct sequence is described.
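A minimal sketch of declaring such a type (the Point name and its fields are illustrative assumptions):

static PyStructSequence_Field point_fields[] = {
    {"x", "x coordinate"},
    {"y", "y coordinate"},
    {NULL, NULL}                    /* sentinel */
};

static PyStructSequence_Desc point_desc = {
    "mymodule.Point",               /* name */
    "A simple 2-D point.",          /* doc */
    point_fields,                   /* fields */
    2                               /* n_in_sequence */
};

/* PyTypeObject *PointType = PyStructSequence_NewType(&point_desc); */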
PyListObject
This subtype of PyObject represents a Python list object.
PyTypeObject PyList_Type
This instance of PyTypeObject represents the Python list type. This is the same object as list in the Python
layer.
int PyList_Check(PyObject *p)
Return true if p is a list object or an instance of a subtype of the list type. This function always succeeds.
int PyList_CheckExact(PyObject *p)
Return true if p is a list object, but not an instance of a subtype of the list type. This function always succeeds.
PyObject* PyList_New(Py_ssize_t len)
Return value: New reference. Return a new list of length len on success, or NULL on failure.
Note: If len is greater than zero, the returned list object’s items are set to NULL. Thus you cannot use abstract
API functions such as PySequence_SetItem() or expose the object to Python code before setting all items
to a real object with PyList_SetItem().
Note: This function “steals” a reference to item and discards a reference to an item already in the list at the affected
position.
Note: This macro “steals” a reference to item, and, unlike PyList_SetItem(), does not discard a reference
to any item that is being replaced; any reference in list at position i will be leaked.
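A sketch of the recommended pattern: fill every slot before the list becomes visible to Python code.

PyObject *list = PyList_New(3);
if (list != NULL) {
    for (Py_ssize_t i = 0; i < 3; i++) {
        PyObject *item = PyLong_FromSsize_t(i);
        if (item == NULL) {
            Py_DECREF(list);
            list = NULL;
            break;
        }
        PyList_SetItem(list, i, item);      /* reference to item is stolen */
    }
}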
PyDictObject
This subtype of PyObject represents a Python dictionary object.
PyTypeObject PyDict_Type
This instance of PyTypeObject represents the Python dictionary type. This is the same object as dict in the
Python layer.
int PyDict_Check(PyObject *p)
Return true if p is a dict object or an instance of a subtype of the dict type. This function always succeeds.
int PyDict_CheckExact(PyObject *p)
Return true if p is a dict object, but not an instance of a subtype of the dict type. This function always succeeds.
PyObject* PyDict_New()
Return value: New reference. Return a new empty dictionary, or NULL on failure.
PyObject* PyDictProxy_New(PyObject *mapping)
Return value: New reference. Return a types.MappingProxyType object for a mapping which enforces read-
only behavior. This is normally used to create a view to prevent modification of the dictionary for non-dynamic
class types.
void PyDict_Clear(PyObject *p)
Empty an existing dictionary of all key-value pairs.
int PyDict_Contains(PyObject *p, PyObject *key)
Determine if dictionary p contains key. If an item in p matches key, return 1, otherwise return 0. On error,
return -1. This is equivalent to the Python expression key in p.
PyObject* PyDict_Copy(PyObject *p)
Return value: New reference. Return a new dictionary that contains the same key-value pairs as p.
Any references returned through them are borrowed. ppos should not be altered during iteration. Its value represents
offsets within the internal dictionary structure, and since the structure is sparse, the offsets are not consecutive.
For example:
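A sketch of the basic iteration pattern (dict is assumed to refer to a dictionary object):

PyObject *key, *value;
Py_ssize_t pos = 0;

while (PyDict_Next(dict, &pos, &key, &value)) {
    /* key and value are borrowed references; do not Py_DECREF them */
    /* ... use key and value here ... */
}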
The dictionary p should not be mutated during iteration. It is safe to modify the values of the keys as you iterate
over the dictionary, but only so long as the set of keys does not change. For example:
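A sketch of updating integer values in place while iterating (assumed to run inside a function returning int, with dict referring to a dictionary whose values are integers; the keys are untouched, so iteration stays valid):

PyObject *key, *value;
Py_ssize_t pos = 0;

while (PyDict_Next(dict, &pos, &key, &value)) {
    long i = PyLong_AsLong(value);
    if (i == -1 && PyErr_Occurred())
        return -1;
    PyObject *o = PyLong_FromLong(i + 1);
    if (o == NULL)
        return -1;
    if (PyDict_SetItem(dict, key, o) < 0) {
        Py_DECREF(o);
        return -1;
    }
    Py_DECREF(o);                       /* PyDict_SetItem does not steal o */
}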
This section details the public API for set and frozenset objects. Any functionality not listed below
is best accessed using either the abstract object protocol (including PyObject_CallMethod(),
PyObject_RichCompareBool(), PyObject_Hash(), PyObject_Repr(), PyObject_IsTrue(),
PyObject_Print(), and PyObject_GetIter()) or the abstract number protocol (includ-
ing PyNumber_And(), PyNumber_Subtract(), PyNumber_Or(), PyNumber_Xor(),
PyNumber_InPlaceAnd(), PyNumber_InPlaceSubtract(), PyNumber_InPlaceOr(), and
PyNumber_InPlaceXor()).
PySetObject
This subtype of PyObject is used to hold the internal data for both set and frozenset objects. It is like
a PyDictObject in that it is a fixed size for small sets (much like tuple storage) and will point to a separate,
variable sized block of memory for medium and large sized sets (much like list storage). None of the fields of this
structure should be considered public and are subject to change. All access should be done through the documented
API rather than by manipulating the values in the structure.
PyTypeObject PySet_Type
This is an instance of PyTypeObject representing the Python set type.
PyTypeObject PyFrozenSet_Type
This is an instance of PyTypeObject representing the Python frozenset type.
The following type check macros work on pointers to any Python object. Likewise, the constructor functions work with
any iterable Python object.
int PySet_Check(PyObject *p)
Return true if p is a set object or an instance of a subtype. This function always succeeds.
int PyFrozenSet_Check(PyObject *p)
Return true if p is a frozenset object or an instance of a subtype. This function always succeeds.
int PyAnySet_Check(PyObject *p)
Return true if p is a set object, a frozenset object, or an instance of a subtype. This function always succeeds.
int PyAnySet_CheckExact(PyObject *p)
Return true if p is a set object or a frozenset object but not an instance of a subtype. This function always
succeeds.
int PyFrozenSet_CheckExact(PyObject *p)
Return true if p is a frozenset object but not an instance of a subtype. This function always succeeds.
PyObject* PySet_New(PyObject *iterable)
Return value: New reference. Return a new set containing objects returned by the iterable. The iterable may be
NULL to create a new empty set. Return the new set on success or NULL on failure. Raise TypeError if iterable
is not actually iterable. The constructor is also useful for copying a set (c=set(s)).
PyObject* PyFrozenSet_New(PyObject *iterable)
Return value: New reference. Return a new frozenset containing objects returned by the iterable. The iter-
able may be NULL to create a new empty frozenset. Return the new set on success or NULL on failure. Raise
TypeError if iterable is not actually iterable.
The following functions and macros are available for instances of set or frozenset or instances of their subtypes.
Py_ssize_t PySet_Size(PyObject *anyset)
Return the length of a set or frozenset object. Equivalent to len(anyset). Raises a
PyExc_SystemError if anyset is not a set, frozenset, or an instance of a subtype.
Py_ssize_t PySet_GET_SIZE(PyObject *anyset)
Macro form of PySet_Size() without error checking.
PyObject* PyFunction_NewWithQualName(PyObject *code, PyObject *globals, PyObject *qualname)
Return value: New reference. As PyFunction_New(), but also allows setting the function object's
__qualname__ attribute. qualname should be a unicode object or NULL; if NULL, the __qualname__
attribute is set to the same value as its __name__ attribute.
New in version 3.3.
PyObject* PyFunction_GetCode(PyObject *op)
Return value: Borrowed reference. Return the code object associated with the function object op.
PyObject* PyFunction_GetGlobals(PyObject *op)
Return value: Borrowed reference. Return the globals dictionary associated with the function object op.
PyObject* PyFunction_GetModule(PyObject *op)
Return value: Borrowed reference. Return the __module__ attribute of the function object op. This is normally a
string containing the module name, but can be set to any other object by Python code.
PyObject* PyFunction_GetDefaults(PyObject *op)
Return value: Borrowed reference. Return the argument default values of the function object op. This can be a
tuple of arguments or NULL.
int PyFunction_SetDefaults(PyObject *op, PyObject *defaults)
Set the argument default values for the function object op. defaults must be Py_None or a tuple.
Raises SystemError and returns -1 on failure.
PyObject* PyFunction_GetClosure(PyObject *op)
Return value: Borrowed reference. Return the closure associated with the function object op. This can be NULL or
a tuple of cell objects.
int PyFunction_SetClosure(PyObject *op, PyObject *closure)
Set the closure associated with the function object op. closure must be Py_None or a tuple of cell objects.
Raises SystemError and returns -1 on failure.
PyObject *PyFunction_GetAnnotations(PyObject *op)
Return value: Borrowed reference. Return the annotations of the function object op. This can be a mutable dictio-
nary or NULL.
int PyFunction_SetAnnotations(PyObject *op, PyObject *annotations)
Set the annotations for the function object op. annotations must be a dictionary or Py_None.
Raises SystemError and returns -1 on failure.
An instance method is a wrapper for a PyCFunction and the new way to bind a PyCFunction to a class object. It
replaces the former call PyMethod_New(func, NULL, class).
PyTypeObject PyInstanceMethod_Type
This instance of PyTypeObject represents the Python instance method type. It is not exposed to Python pro-
grams.
int PyInstanceMethod_Check(PyObject *o)
Return true if o is an instance method object (has type PyInstanceMethod_Type). The parameter must not
be NULL. This function always succeeds.
PyObject* PyInstanceMethod_New(PyObject *func)
Return value: New reference. Return a new instance method object, with func being any callable object. func is
the function that will be called when the instance method is called.
PyObject* PyInstanceMethod_Function(PyObject *im)
Return value: Borrowed reference. Return the function object associated with the instance method im.
Methods are bound function objects. Methods are always bound to an instance of a user-defined class. Unbound methods
(methods bound to a class object) are no longer available.
PyTypeObject PyMethod_Type
This instance of PyTypeObject represents the Python method type. This is exposed to Python programs as
types.MethodType.
int PyMethod_Check(PyObject *o)
Return true if o is a method object (has type PyMethod_Type). The parameter must not be NULL. This function
always succeeds.
PyObject* PyMethod_New(PyObject *func, PyObject *self)
Return value: New reference. Return a new method object, with func being any callable object and self the instance
the method should be bound to. func is the function that will be called when the method is called. self must not be
NULL.
PyObject* PyMethod_Function(PyObject *meth)
Return value: Borrowed reference. Return the function object associated with the method meth.
PyObject* PyMethod_GET_FUNCTION(PyObject *meth)
Return value: Borrowed reference. Macro version of PyMethod_Function() which avoids error checking.
PyObject* PyMethod_Self(PyObject *meth)
Return value: Borrowed reference. Return the instance associated with the method meth.
PyObject* PyMethod_GET_SELF(PyObject *meth)
Return value: Borrowed reference. Macro version of PyMethod_Self() which avoids error checking.
“Cell” objects are used to implement variables referenced by multiple scopes. For each such variable, a cell object is
created to store the value; the local variables of each stack frame that references the value contains a reference to the cells
from outer scopes which also use that variable. When the value is accessed, the value contained in the cell is used instead
of the cell object itself. This de-referencing of the cell object requires support from the generated byte-code; these are
not automatically de-referenced when accessed. Cell objects are not likely to be useful elsewhere.
PyCellObject
The C structure used for cell objects.
PyTypeObject PyCell_Type
The type object corresponding to cell objects.
int PyCell_Check(ob)
Return true if ob is a cell object; ob must not be NULL. This function always succeeds.
PyObject* PyCell_New(PyObject *ob)
Return value: New reference. Create and return a new cell object containing the value ob. The parameter may be
NULL.
PyObject* PyCell_Get(PyObject *cell)
Return value: New reference. Return the contents of the cell cell.
Code objects are a low-level detail of the CPython implementation. Each one represents a chunk of executable code that
hasn’t yet been bound into a function.
PyCodeObject
The C structure of the objects used to describe code objects. The fields of this type are subject to change at any
time.
PyTypeObject PyCode_Type
This is an instance of PyTypeObject representing the Python code type.
int PyCode_Check(PyObject *co)
Return true if co is a code object. This function always succeeds.
int PyCode_GetNumFree(PyCodeObject *co)
Return the number of free variables in co.
PyCodeObject* PyCode_New(int argcount, int kwonlyargcount, int nlocals, int stacksize, int flags,
PyObject *code, PyObject *consts, PyObject *names, PyObject *varnames, PyObject *freevars,
PyObject *cellvars, PyObject *filename, PyObject *name, int firstlineno, PyObject *lnotab)
Return value: New reference. Return a new code object. If you need a dummy code object to create a frame, use
PyCode_NewEmpty() instead. Calling PyCode_New() directly can bind you to a precise Python version
since the definition of the bytecode changes often.
PyCodeObject* PyCode_NewWithPosOnlyArgs(int argcount, int posonlyargcount, int kwonlyargcount,
int nlocals, int stacksize, int flags, PyObject *code, PyObject *consts, PyObject *names,
PyObject *varnames, PyObject *freevars, PyObject *cellvars, PyObject *filename,
PyObject *name, int firstlineno, PyObject *lnotab)
Return value: New reference. Similar to PyCode_New(), but with an extra “posonlyargcount” for positional-only
arguments.
New in version 3.8.
PyCodeObject* PyCode_NewEmpty(const char *filename, const char *funcname, int firstlineno)
Return value: New reference. Return a new empty code object with the specified filename, function name, and first
line number. It is illegal to exec() or eval() the resulting code object.
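As an illustration, a sketch of creating such a dummy code object (the file and function names are placeholders):

PyCodeObject *code = PyCode_NewEmpty("example.c", "example_func", 1);
if (code == NULL)
    return NULL;
/* ... use the code object, e.g. to create a frame ... */
Py_DECREF(code);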
These APIs are a minimal emulation of the Python 2 C API for built-in file objects, which used to rely on the buffered
I/O (FILE*) support from the C standard library. In Python 3, files and streams use the new io module, which defines
several layers over the low-level unbuffered I/O of the operating system. The functions described below are convenience
C wrappers over these new APIs, and meant mostly for internal error reporting in the interpreter; third-party code is
advised to access the io APIs instead.
PyObject* PyFile_FromFd(int fd, const char *name, const char *mode, int buffering, const char *encoding,
const char *errors, const char *newline, int closefd)
Return value: New reference. Create a Python file object from the file descriptor of an already opened file fd. The
arguments name, encoding, errors and newline can be NULL to use the defaults; buffering can be -1 to use the
default. name is ignored and kept for backward compatibility. Return NULL on failure. For a more comprehensive
description of the arguments, please refer to the io.open() function documentation.
Warning: Since Python streams have their own buffering layer, mixing them with OS-level file descriptors
can produce various issues (such as unexpected ordering of data).
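For example, a sketch that wraps an already opened file descriptor (here the standard output descriptor, which is
left open because closefd is 0) as a text stream; the argument values are illustrative:

PyObject *f = PyFile_FromFd(1, NULL, "w", -1, "utf-8", NULL, NULL, 0);
if (f == NULL)
    return NULL;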
int PyFile_WriteObject(PyObject *obj, PyObject *p, int flags)
Write object obj to file object p. The only supported flag for flags is Py_PRINT_RAW; if given, the str() of the
object is written instead of the repr(). Return 0 on success or -1 on failure; the appropriate exception will be
set.
int PyFile_WriteString(const char *s, PyObject *p)
Write string s to file object p. Return 0 on success or -1 on failure; the appropriate exception will be set.
PyTypeObject PyModule_Type
This instance of PyTypeObject represents the Python module type. This is exposed to Python programs as
types.ModuleType.
int PyModule_Check(PyObject *p)
Return true if p is a module object, or a subtype of a module object. This function always succeeds.
int PyModule_CheckExact(PyObject *p)
Return true if p is a module object, but not a subtype of PyModule_Type. This function always succeeds.
PyObject* PyModule_NewObject(PyObject *name)
Return value: New reference. Return a new module object with the __name__ attribute set to name. The module’s
__name__, __doc__, __package__, and __loader__ attributes are filled in (all but __name__ are set
to None); the caller is responsible for providing a __file__ attribute.
New in version 3.3.
Changed in version 3.4: __package__ and __loader__ are set to None.
PyObject* PyModule_New(const char *name)
Return value: New reference. Similar to PyModule_NewObject(), but the name is a UTF-8 encoded string
instead of a Unicode object.
PyObject* PyModule_GetDict(PyObject *module)
Return value: Borrowed reference. Return the dictionary object that implements module’s namespace; this object
is the same as the __dict__ attribute of the module object. If module is not a module object (or a subtype of a
module object), SystemError is raised and NULL is returned.
It is recommended extensions use other PyModule_*() and PyObject_*() functions rather than directly
manipulate a module’s __dict__.
PyObject* PyModule_GetNameObject(PyObject *module)
Return value: New reference. Return module’s __name__ value. If the module does not provide one, or if it is
not a string, SystemError is raised and NULL is returned.
New in version 3.3.
const char* PyModule_GetName(PyObject *module)
Similar to PyModule_GetNameObject() but return the name encoded to 'utf-8'.
void* PyModule_GetState(PyObject *module)
Return the “state” of the module, that is, a pointer to the block of memory allocated at module creation time, or
NULL. See PyModuleDef.m_size.
PyModuleDef* PyModule_GetDef(PyObject *module)
Return a pointer to the PyModuleDef struct from which the module was created, or NULL if the module wasn’t
created from a definition.
PyObject* PyModule_GetFilenameObject(PyObject *module)
Return value: New reference. Return the name of the file from which module was loaded using module’s __file__
attribute. If this is not defined, or if it is not a unicode string, raise SystemError and return NULL; otherwise
return a reference to a Unicode object.
Initializing C modules
Module objects are usually created from extension modules (shared libraries which export an initialization function), or
compiled-in modules (where the initialization function is added using PyImport_AppendInittab()). See building
or extending-with-embedding for details.
The initialization function can either pass a module definition instance to PyModule_Create(), and return the re-
sulting module object, or request “multi-phase initialization” by returning the definition struct itself.
PyModuleDef
The module definition struct, which holds all information needed to create a module object. There is usually only
one statically initialized variable of this type for each module.
PyModuleDef_Base m_base
Always initialize this member to PyModuleDef_HEAD_INIT.
const char *m_name
Name for the new module.
const char *m_doc
Docstring for the module; usually a docstring variable created with PyDoc_STRVAR is used.
Py_ssize_t m_size
Module state may be kept in a per-module memory area that can be retrieved with
PyModule_GetState(), rather than in static globals. This makes modules safe for use in multi-
ple sub-interpreters.
This memory area is allocated based on m_size on module creation, and freed when the module object is
deallocated, after the m_free function has been called, if present.
Setting m_size to -1 means that the module does not support sub-interpreters, because it has global state.
Setting it to a non-negative value means that the module can be re-initialized and specifies the additional
amount of memory it requires for its state. Non-negative m_size is required for multi-phase initialization.
See PEP 3121 for more details.
PyMethodDef* m_methods
A pointer to a table of module-level functions, described by PyMethodDef values. Can be NULL if no
functions are present.
PyModuleDef_Slot* m_slots
An array of slot definitions for multi-phase initialization, terminated by a {0, NULL} entry. When using
single-phase initialization, m_slots must be NULL.
Changed in version 3.5: Prior to version 3.5, this member was always set to NULL, and was defined as:
inquiry m_reload
traverseproc m_traverse
A traversal function to call during GC traversal of the module object, or NULL if not needed.
This function is not called if the module state was requested but is not allocated yet. This is the case imme-
diately after the module is created and before the module is executed (Py_mod_exec function). More
precisely, this function is not called if m_size is greater than 0 and the module state (as returned by
PyModule_GetState()) is NULL.
Changed in version 3.9: No longer called before the module state is allocated.
inquiry m_clear
A clear function to call during GC clearing of the module object, or NULL if not needed.
This function is not called if the module state was requested but is not allocated yet. This is the case imme-
diately after the module is created and before the module is executed (Py_mod_exec function). More
precisely, this function is not called if m_size is greater than 0 and the module state (as returned by
PyModule_GetState()) is NULL.
Like PyTypeObject.tp_clear, this function is not always called before a module is deallocated. For
example, when reference counting is enough to determine that an object is no longer used, the cyclic garbage
collector is not involved and m_free is called directly.
Changed in version 3.9: No longer called before the module state is allocated.
freefunc m_free
A function to call during deallocation of the module object, or NULL if not needed.
This function is not called if the module state was requested but is not allocated yet. This is the case imme-
diately after the module is created and before the module is executed (Py_mod_exec function). More
precisely, this function is not called if m_size is greater than 0 and the module state (as returned by
PyModule_GetState()) is NULL.
Changed in version 3.9: No longer called before the module state is allocated.
Single-phase initialization
The module initialization function may create and return the module object directly. This is referred to as “single-phase
initialization”, and uses one of the following two module creation functions:
PyObject* PyModule_Create(PyModuleDef *def)
Return value: New reference. Create a new module object, given the definition in def. This behaves like
PyModule_Create2() with module_api_version set to PYTHON_API_VERSION.
PyObject* PyModule_Create2(PyModuleDef *def, int module_api_version)
Return value: New reference. Create a new module object, given the definition in def, assuming the API version
module_api_version. If that version does not match the version of the running interpreter, a RuntimeWarning
is emitted.
Note: Most uses of this function should be using PyModule_Create() instead; only use this if you are sure
you need it.
Before it is returned from the initialization function, the resulting module object is typically populated using functions
like PyModule_AddObject().
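As an illustration, a minimal single-phase initialization sketch for a hypothetical module named example (the names
are assumptions, not part of this API):

static struct PyModuleDef examplemodule = {
    PyModuleDef_HEAD_INIT,
    "example",              /* m_name */
    "An example module.",   /* m_doc */
    -1,                     /* m_size: global state, no sub-interpreter support */
    NULL,                   /* m_methods */
};

PyMODINIT_FUNC
PyInit_example(void)
{
    return PyModule_Create(&examplemodule);
}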
Multi-phase initialization
An alternate way to specify extensions is to request “multi-phase initialization”. Extension modules created this way behave
more like Python modules: the initialization is split between the creation phase, when the module object is created, and
the execution phase, when it is populated. The distinction is similar to the __new__() and __init__() methods of
classes.
Unlike modules created using single-phase initialization, these modules are not singletons: if the sys.modules entry is
removed and the module is re-imported, a new module object is created, and the old module is subject to normal garbage
collection – as with Python modules. By default, multiple modules created from the same definition should be independent:
changes to one should not affect the others. This means that all state should be specific to the module object (e.g.
using PyModule_GetState()), or its contents (such as the module’s __dict__ or individual classes created with
PyType_FromSpec()).
All modules created using multi-phase initialization are expected to support sub-interpreters. Making sure multiple mod-
ules are independent is typically enough to achieve this.
To request multi-phase initialization, the initialization function (PyInit_modulename) returns a PyModuleDef instance
with non-empty m_slots. Before it is returned, the PyModuleDef instance must be initialized with the following
function:
PyObject* PyModuleDef_Init(PyModuleDef *def)
Return value: Borrowed reference. Ensures a module definition is a properly initialized Python object that correctly
reports its type and reference count.
Returns def cast to PyObject*, or NULL if an error occurred.
New in version 3.5.
The m_slots member of the module definition must point to an array of PyModuleDef_Slot structures:
PyModuleDef_Slot
int slot
A slot ID, chosen from the available values explained below.
void* value
Value of the slot, whose meaning depends on the slot ID.
New in version 3.5.
The m_slots array must be terminated by a slot with id 0.
The available slot types are:
Py_mod_create
Specifies a function that is called to create the module object itself. The value pointer of this slot must point to a
function of the signature:
PyObject* create_module(PyObject *spec, PyModuleDef *def)
The function receives a ModuleSpec instance, as defined in PEP 451, and the module definition. It should return
a new module object, or set an error and return NULL.
This function should be kept minimal. In particular, it should not call arbitrary Python code, as trying to import
the same module again may result in an infinite loop.
Multiple Py_mod_create slots may not be specified in one module definition.
If Py_mod_create is not specified, the import machinery will create a normal module object using
PyModule_New(). The name is taken from spec, not the definition, to allow extension modules to dynami-
cally adjust to their place in the module hierarchy and be imported under different names through symlinks, all
while sharing a single module definition.
There is no requirement for the returned object to be an instance of PyModule_Type. Any type can be used, as
long as it supports setting and getting import-related attributes. However, only PyModule_Type instances may
be returned if the PyModuleDef has non-NULL m_traverse, m_clear, m_free; non-zero m_size; or
slots other than Py_mod_create.
Py_mod_exec
Specifies a function that is called to execute the module. This is equivalent to executing the code of a Python module:
typically, this function adds classes and constants to the module. The signature of the function is:
int exec_module(PyObject* module)
If multiple Py_mod_exec slots are specified, they are processed in the order they appear in the m_slots array.
See PEP 489 for more details on multi-phase initialization.
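Putting the pieces together, a hedged sketch of a multi-phase definition for a hypothetical module named example,
with a single Py_mod_exec slot:

static int
example_exec(PyObject *module)
{
    /* Populate the module, as module-level Python code would. */
    if (PyModule_AddIntConstant(module, "answer", 42) < 0)
        return -1;
    return 0;
}

static PyModuleDef_Slot example_slots[] = {
    {Py_mod_exec, example_exec},
    {0, NULL}
};

static struct PyModuleDef examplemodule = {
    PyModuleDef_HEAD_INIT,
    "example",      /* m_name */
    NULL,           /* m_doc */
    0,              /* m_size: non-negative, as required for multi-phase init */
    NULL,           /* m_methods */
    example_slots,  /* m_slots */
    NULL, NULL, NULL
};

PyMODINIT_FUNC
PyInit_example(void)
{
    return PyModuleDef_Init(&examplemodule);
}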
The following functions are called under the hood when using multi-phase initialization. They can be used di-
rectly, for example when creating module objects dynamically. Note that both PyModule_FromDefAndSpec and
PyModule_ExecDef must be called to fully initialize a module.
PyObject * PyModule_FromDefAndSpec(PyModuleDef *def, PyObject *spec)
Return value: New reference. Create a new module object, given the definition in def and the
ModuleSpec spec. This behaves like PyModule_FromDefAndSpec2() with module_api_version set to
PYTHON_API_VERSION.
New in version 3.5.
PyObject * PyModule_FromDefAndSpec2(PyModuleDef *def, PyObject *spec, int module_api_version)
Return value: New reference. Create a new module object, given the definition in def and the ModuleSpec spec,
assuming the API version module_api_version. If that version does not match the version of the running interpreter,
a RuntimeWarning is emitted.
Note: Most uses of this function should be using PyModule_FromDefAndSpec() instead; only use this if
you are sure you need it.
Support functions
The module initialization function (if using single phase initialization) or a function called from a module execution slot
(if using multi-phase initialization), can use the following functions to help initialize the module state:
int PyModule_AddObject(PyObject *module, const char *name, PyObject *value)
Add an object to module as name. This is a convenience function which can be used from the module’s initialization
function. This steals a reference to value on success. Return -1 on error, 0 on success.
Note: Unlike other functions that steal references, PyModule_AddObject() only decrements the reference
count of value on success.
This means that its return value must be checked, and calling code must Py_DECREF() value manually on error.
Example usage:
Py_INCREF(spam);
if (PyModule_AddObject(module, "spam", spam) < 0) {
Py_DECREF(module);
Py_DECREF(spam);
return NULL;
}
Module lookup
Single-phase initialization creates singleton modules that can be looked up in the context of the current interpreter. This
allows the module object to be retrieved later with only a reference to the module definition.
These functions will not work on modules created using multi-phase initialization, since multiple such modules can be
created from a single definition.
PyObject* PyState_FindModule(PyModuleDef *def)
Return value: Borrowed reference. Returns the module object that was created from def for the current
interpreter. This method requires that the module object has been attached to the interpreter state with
PyState_AddModule() beforehand. In case the corresponding module object is not found or has not been
attached to the interpreter state yet, it returns NULL.
int PyState_AddModule(PyObject *module, PyModuleDef *def)
Attaches the module object passed to the function to the interpreter state. This allows the module object to be
accessible via PyState_FindModule().
Only effective on modules created using single-phase initialization.
Python calls PyState_AddModule automatically after importing a module, so it is unnecessary (but harmless)
to call it from module initialization code. An explicit call is needed only if the module’s own init code subsequently
calls PyState_FindModule. The function is mainly intended for implementing alternative import mechanisms
(either by calling it directly, or by referring to its implementation for details of the required state updates).
The caller must hold the GIL.
Return 0 on success or -1 on failure.
New in version 3.3.
int PyState_RemoveModule(PyModuleDef *def)
Removes the module object created from def from the interpreter state. Return 0 on success or -1 on failure.
The caller must hold the GIL.
New in version 3.3.
Python provides two general-purpose iterator objects. The first, a sequence iterator, works with an arbitrary sequence
supporting the __getitem__() method. The second works with a callable object and a sentinel value, calling the
callable for each item in the sequence, and ending the iteration when the sentinel value is returned.
PyTypeObject PySeqIter_Type
Type object for iterator objects returned by PySeqIter_New() and the one-argument form of the iter()
built-in function for built-in sequence types.
int PySeqIter_Check(op)
Return true if the type of op is PySeqIter_Type. This function always succeeds.
PyObject* PySeqIter_New(PyObject *seq)
Return value: New reference. Return an iterator that works with a general sequence object, seq. The iteration ends
when the sequence raises IndexError for the subscripting operation.
PyTypeObject PyCallIter_Type
Type object for iterator objects returned by PyCallIter_New() and the two-argument form of the iter()
built-in function.
int PyCallIter_Check(op)
Return true if the type of op is PyCallIter_Type. This function always succeeds.
“Descriptors” are objects that describe some attribute of an object. They are found in the dictionary of type objects.
PyTypeObject PyProperty_Type
The type object for the built-in descriptor types.
PyObject* PyDescr_NewGetSet(PyTypeObject *type, struct PyGetSetDef *getset)
Return value: New reference.
PyObject* PyDescr_NewMember(PyTypeObject *type, struct PyMemberDef *meth)
Return value: New reference.
PyObject* PyDescr_NewMethod(PyTypeObject *type, struct PyMethodDef *meth)
Return value: New reference.
PyObject* PyDescr_NewWrapper(PyTypeObject *type, struct wrapperbase *wrapper, void *wrapped)
Return value: New reference.
PyObject* PyDescr_NewClassMethod(PyTypeObject *type, PyMethodDef *method)
Return value: New reference.
int PyDescr_IsData(PyObject *descr)
Return true if the descriptor object descr describes a data attribute, or false if it describes a method. descr must
be a descriptor object; there is no error checking.
PyObject* PyWrapper_New(PyObject *, PyObject *)
Return value: New reference.
PyTypeObject PySlice_Type
The type object for slice objects. This is the same as slice in the Python layer.
int PySlice_Check(PyObject *ob)
Return true if ob is a slice object; ob must not be NULL. This function always succeeds.
PyObject* PySlice_New(PyObject *start, PyObject *stop, PyObject *step)
Return value: New reference. Return a new slice object with the given values. The start, stop, and step parameters
are used as the values of the slice object attributes of the same names. Any of the values may be NULL, in which
case None will be used for the corresponding attribute. Return NULL if the new object could not be allocated.
int PySlice_GetIndices(PyObject *slice, Py_ssize_t length, Py_ssize_t *start, Py_ssize_t *stop,
Py_ssize_t *step)
Retrieve the start, stop and step indices from the slice object slice, assuming a sequence of length length. Treats
indices greater than length as errors.
Returns 0 on success and -1 on error with no exception set (unless one of the indices was not None and failed to
be converted to an integer, in which case -1 is returned with an exception set).
You probably do not want to use this function.
Changed in version 3.2: The parameter type for the slice parameter was PySliceObject* before.
int PySlice_GetIndicesEx(PyObject *slice, Py_ssize_t length, Py_ssize_t *start, Py_ssize_t *stop,
Py_ssize_t *step, Py_ssize_t *slicelength)
Usable replacement for PySlice_GetIndices(). Retrieve the start, stop, and step indices from the slice object
slice assuming a sequence of length length, and store the length of the slice in slicelength. Out of bounds indices
are clipped in a manner consistent with the handling of normal slices.
Returns 0 on success and -1 on error with an exception set.
Note: This function is considered not safe for resizable sequences. Its invocation should be replaced by a
combination of PySlice_Unpack() and PySlice_AdjustIndices() where
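/* (sketch; start, stop, step and slicelength are Py_ssize_t variables) */
if (PySlice_GetIndicesEx(slice, length, &start, &stop, &step, &slicelength) < 0) {
    /* return error */
}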
is replaced by
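/* (sketch) */
if (PySlice_Unpack(slice, &start, &stop, &step) < 0) {
    /* return error */
}
slicelength = PySlice_AdjustIndices(length, &start, &stop, step);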
Changed in version 3.2: The parameter type for the slice parameter was PySliceObject* before.
Changed in version 3.6.1: If Py_LIMITED_API is not set, or is set to a value between 0x03050400 and
0x03060000 (exclusive) or to 0x03060100 or higher, PySlice_GetIndicesEx() is implemented as a
macro using PySlice_Unpack() and PySlice_AdjustIndices(). The arguments start, stop and step are
evaluated more than once.
Deprecated since version 3.6.1: If Py_LIMITED_API is set to a value less than 0x03050400, or to a value between
0x03060000 and 0x03060100 (exclusive), PySlice_GetIndicesEx() is a deprecated function.
int PySlice_Unpack(PyObject *slice, Py_ssize_t *start, Py_ssize_t *stop, Py_ssize_t *step)
Extract the start, stop and step data members from a slice object as C integers. Silently reduce values
larger than PY_SSIZE_T_MAX to PY_SSIZE_T_MAX, silently boost the start and stop values less than
PY_SSIZE_T_MIN to PY_SSIZE_T_MIN, and silently boost the step values less than -PY_SSIZE_T_MAX
to -PY_SSIZE_T_MAX.
Return -1 on error, 0 on success.
New in version 3.6.1.
Py_ssize_t PySlice_AdjustIndices(Py_ssize_t length, Py_ssize_t *start, Py_ssize_t *stop,
Py_ssize_t step)
Adjust start/end slice indices assuming a sequence of the specified length. Out of bounds indices are clipped in a
manner consistent with the handling of normal slices.
Return the length of the slice. Always successful. Doesn’t call Python code.
New in version 3.6.1.
PyObject *Py_Ellipsis
The Python Ellipsis object. This object has no methods. It needs to be treated just like any other object with
respect to reference counts. Like Py_None it is a singleton object.
A memoryview object exposes the C level buffer interface as a Python object which can then be passed around like any
other object.
PyObject *PyMemoryView_FromObject(PyObject *obj)
Return value: New reference. Create a memoryview object from an object that provides the buffer interface. If obj
supports writable buffer exports, the memoryview object will be read/write, otherwise it may be either read-only
or read/write at the discretion of the exporter.
PyObject *PyMemoryView_FromMemory(char *mem, Py_ssize_t size, int flags)
Return value: New reference. Create a memoryview object using mem as the underlying buffer. flags can be one of
PyBUF_READ or PyBUF_WRITE.
New in version 3.3.
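For example, a sketch exposing a C buffer as a read-only memoryview (the buffer is static because it must stay
alive as long as the memoryview exists):

static char buf[] = "hello";
PyObject *view = PyMemoryView_FromMemory(buf, sizeof(buf) - 1, PyBUF_READ);
if (view == NULL)
    return NULL;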
PyObject *PyMemoryView_FromBuffer(Py_buffer *view)
Return value: New reference. Create a memoryview object wrapping the given buffer structure view. For simple
byte buffers, PyMemoryView_FromMemory() is the preferred function.
PyObject *PyMemoryView_GetContiguous(PyObject *obj, int buffertype, char order)
Return value: New reference. Create a memoryview object to a contiguous chunk of memory (in either ‘C’ or
‘F’ortran order) from an object that defines the buffer interface. If memory is contiguous, the memoryview object
points to the original memory. Otherwise, a copy is made and the memoryview points to a new bytes object.
int PyMemoryView_Check(PyObject *obj)
Return true if the object obj is a memoryview object. It is not currently allowed to create subclasses of
memoryview. This function always succeeds.
Py_buffer *PyMemoryView_GET_BUFFER(PyObject *mview)
Return a pointer to the memoryview’s private copy of the exporter’s buffer. mview must be a memoryview instance;
this macro doesn’t check its type, you must do it yourself or you will risk crashes.
Py_buffer *PyMemoryView_GET_BASE(PyObject *mview)
Return either a pointer to the exporting object that the memoryview is based on or NULL if
the memoryview has been created by one of the functions PyMemoryView_FromMemory() or
PyMemoryView_FromBuffer(). mview must be a memoryview instance.
Python supports weak references as first-class objects. There are two specific object types which directly implement weak
references. The first is a simple reference object, and the second acts as a proxy for the original object as much as it can.
int PyWeakref_Check(ob)
Return true if ob is either a reference or proxy object. This function always succeeds.
int PyWeakref_CheckRef(ob)
Return true if ob is a reference object. This function always succeeds.
int PyWeakref_CheckProxy(ob)
Return true if ob is a proxy object. This function always succeeds.
PyObject* PyWeakref_GetObject(PyObject *ref)
Return value: Borrowed reference. Return the referenced object from a weak reference, ref. If the referent is no
longer live, returns Py_None.
Note: This function returns a borrowed reference to the referenced object. This means that you should always
call Py_INCREF() on the object except if you know that it cannot be destroyed while you are still using it.
8.6.9 Capsules
PyObject* PyCapsule_New(void *pointer, const char *name, PyCapsule_Destructor destructor)
Return value: New reference. Create a PyCapsule encapsulating the pointer. The pointer argument may not be
NULL. On failure, set an exception and return NULL.
The name string may either be NULL or a pointer to a valid C string. If non-NULL, this string must outlive the capsule.
If the destructor argument is not NULL, it will be called with the capsule as its argument when it is destroyed.
If this capsule will be stored as an attribute of a module, the name should be specified as
modulename.attributename. This will enable other modules to import the capsule using PyCapsule_Import().
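As an illustration, a sketch that stores a hypothetical C API table in a capsule and adds it to a module during
initialization (examplemodule._C_API and the surrounding module variable are assumed names):

static void *example_api[] = { NULL };   /* would hold real function pointers */

PyObject *capsule = PyCapsule_New((void *)example_api, "examplemodule._C_API", NULL);
if (capsule == NULL || PyModule_AddObject(module, "_C_API", capsule) < 0) {
    Py_XDECREF(capsule);
    return NULL;
}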
void* PyCapsule_GetPointer(PyObject *capsule, const char *name)
Retrieve the pointer stored in the capsule. On failure, set an exception and return NULL.
The name parameter must compare exactly to the name stored in the capsule. If the name stored in the capsule is
NULL, the name passed in must also be NULL. Python uses the C function strcmp() to compare capsule names.
PyCapsule_Destructor PyCapsule_GetDestructor(PyObject *capsule)
Return the current destructor stored in the capsule. On failure, set an exception and return NULL.
It is legal for a capsule to have a NULL destructor. This makes a NULL return code somewhat ambiguous; use
PyCapsule_IsValid() or PyErr_Occurred() to disambiguate.
void* PyCapsule_GetContext(PyObject *capsule)
Return the current context stored in the capsule. On failure, set an exception and return NULL.
It is legal for a capsule to have a NULL context. This makes a NULL return code somewhat ambiguous; use
PyCapsule_IsValid() or PyErr_Occurred() to disambiguate.
const char* PyCapsule_GetName(PyObject *capsule)
Return the current name stored in the capsule. On failure, set an exception and return NULL.
It is legal for a capsule to have a NULL name. This makes a NULL return code somewhat ambiguous; use
PyCapsule_IsValid() or PyErr_Occurred() to disambiguate.
void* PyCapsule_Import(const char *name, int no_block)
Import a pointer to a C object from a capsule attribute in a module. The name parameter should specify the full
name to the attribute, as in module.attribute. The name stored in the capsule must match this string exactly.
If no_block is true, import the module without blocking (using PyImport_ImportModuleNoBlock()). If
no_block is false, import the module conventionally (using PyImport_ImportModule()).
Return the capsule’s internal pointer on success. On failure, set an exception and return NULL.
int PyCapsule_IsValid(PyObject *capsule, const char *name)
Determines whether or not capsule is a valid capsule. A valid capsule is non-NULL, passes
PyCapsule_CheckExact(), has a non-NULL pointer stored in it, and its internal name matches the
name parameter. (See PyCapsule_GetPointer() for information on how capsule names are compared.)
In other words, if PyCapsule_IsValid() returns a true value, calls to any of the accessors (any function
starting with PyCapsule_Get()) are guaranteed to succeed.
Return a nonzero value if the object is valid and matches the name passed in. Return 0 otherwise. This function
will not fail.
int PyCapsule_SetContext(PyObject *capsule, void *context)
Set the context pointer inside capsule to context.
Return 0 on success. Return nonzero and set an exception on failure.
int PyCapsule_SetDestructor(PyObject *capsule, PyCapsule_Destructor destructor)
Set the destructor inside capsule to destructor.
Return 0 on success. Return nonzero and set an exception on failure.
int PyCapsule_SetName(PyObject *capsule, const char *name)
Set the name inside capsule to name. If non-NULL, the name must outlive the capsule. If the previous name stored
in the capsule was not NULL, no attempt is made to free it.
Return 0 on success. Return nonzero and set an exception on failure.
Generator objects are what Python uses to implement generator iterators. They are normally created by iterating over a
function that yields values, rather than explicitly calling PyGen_New() or PyGen_NewWithQualName().
PyGenObject
The C structure used for generator objects.
PyTypeObject PyGen_Type
The type object corresponding to generator objects.
int PyGen_Check(PyObject *ob)
Return true if ob is a generator object; ob must not be NULL. This function always succeeds.
int PyGen_CheckExact(PyObject *ob)
Return true if ob’s type is PyGen_Type; ob must not be NULL. This function always succeeds.
PyObject* PyGen_New(PyFrameObject *frame)
Return value: New reference. Create and return a new generator object based on the frame object. A reference to
frame is stolen by this function. The argument must not be NULL.
PyObject* PyGen_NewWithQualName(PyFrameObject *frame, PyObject *name, PyObject *qualname)
Return value: New reference. Create and return a new generator object based on the frame object, with __name__
and __qualname__ set to name and qualname. A reference to frame is stolen by this function. The frame
argument must not be NULL.
Note: Changed in version 3.7.1: In Python 3.7.1 the signatures of all context variables C APIs were changed to use
PyObject pointers instead of PyContext, PyContextVar, and PyContextToken, e.g.:
// in 3.7.0:
PyContext *PyContext_New(void);
// in 3.7.1+:
PyObject *PyContext_New(void);
Various date and time objects are supplied by the datetime module. Before using any of these functions, the header
file datetime.h must be included in your source (note that this is not included by Python.h), and the macro
PyDateTime_IMPORT must be invoked, usually as part of the module initialisation function. The macro puts a pointer
to a C structure into a static variable, PyDateTimeAPI, that is used by the following macros.
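A minimal sketch of that setup, assuming the code runs inside a module initialization function that returns a
PyObject*:

#include <Python.h>
#include <datetime.h>              /* not pulled in by Python.h */

/* ... later, inside the module initialization function: */
PyDateTime_IMPORT;                 /* fills in the static PyDateTimeAPI pointer */
if (PyDateTimeAPI == NULL)
    return NULL;                   /* an exception is set on failure */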
Macro for access to the UTC singleton:
PyObject* PyDateTime_TimeZone_UTC
Returns the time zone singleton representing UTC, the same object as datetime.timezone.utc.
New in version 3.7.
Type-check macros:
int PyDate_Check(PyObject *ob)
Return true if ob is of type PyDateTime_DateType or a subtype of PyDateTime_DateType. ob must
not be NULL. This function always succeeds.
int PyDate_CheckExact(PyObject *ob)
Return true if ob is of type PyDateTime_DateType. ob must not be NULL. This function always succeeds.
int PyDateTime_Check(PyObject *ob)
Return true if ob is of type PyDateTime_DateTimeType or a subtype of PyDateTime_DateTimeType.
ob must not be NULL. This function always succeeds.
Various built-in types for type hinting are provided. Only GenericAlias is exposed to C.
PyObject* Py_GenericAlias(PyObject *origin, PyObject *args)
Create a GenericAlias object. Equivalent to calling the Python class types.GenericAlias. The origin and
args arguments set the GenericAlias’s __origin__ and __args__ attributes respectively. origin should
be a PyTypeObject*, and args can be a PyTupleObject* or any PyObject*. If args is not a
tuple, a 1-tuple is automatically constructed and __args__ is set to (args,). Minimal checking is done for the
arguments, so the function will succeed even if origin is not a type. The GenericAlias’s __parameters__
attribute is constructed lazily from __args__. On failure, an exception is raised and NULL is returned.
Here’s an example of how to make an extension type generic:
...
static PyMethodDef my_obj_methods[] = {
    // Other methods.
    ...
    {"__class_getitem__", (PyCFunction)Py_GenericAlias, METH_O|METH_CLASS, "See PEP 585"},
    ...
};
See also:
The data model method __class_getitem__().
New in version 3.9.
PyTypeObject Py_GenericAliasType
The C type of the object returned by Py_GenericAlias(). Equivalent to types.GenericAlias in
Python.
New in version 3.9.
Initialization, Finalization, and Threads
In an application embedding Python, the Py_Initialize() function must be called before using any other Python/C
API functions, with the exception of a few functions and the global configuration variables.
The following functions can be safely called before Python is initialized:
• Configuration functions:
– PyImport_AppendInittab()
– PyImport_ExtendInittab()
– PyInitFrozenExtensions()
– PyMem_SetAllocator()
– PyMem_SetupDebugHooks()
– PyObject_SetArenaAllocator()
– Py_SetPath()
– Py_SetProgramName()
– Py_SetPythonHome()
– Py_SetStandardStreamEncoding()
– PySys_AddWarnOption()
– PySys_AddXOption()
– PySys_ResetWarnOptions()
• Informative functions:
– Py_IsInitialized()
– PyMem_GetAllocator()
– PyObject_GetArenaAllocator()
– Py_GetBuildInfo()
– Py_GetCompiler()
– Py_GetCopyright()
– Py_GetPlatform()
– Py_GetVersion()
• Utilities:
– Py_DecodeLocale()
• Memory allocators:
– PyMem_RawMalloc()
– PyMem_RawRealloc()
– PyMem_RawCalloc()
– PyMem_RawFree()
Note: The following functions should not be called before Py_Initialize(): Py_EncodeLocale(),
Py_GetPath(), Py_GetPrefix(), Py_GetExecPrefix(), Py_GetProgramFullPath(),
Py_GetPythonHome(), Py_GetProgramName() and PyEval_InitThreads().
Python has variables for the global configuration to control different features and options. By default, these flags are
controlled by command line options.
When a flag is set by an option, the value of the flag is the number of times that the option was set. For example, -b sets
Py_BytesWarningFlag to 1 and -bb sets Py_BytesWarningFlag to 2.
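For example, an embedding application might set some of these flags itself before initializing the interpreter; a
hedged sketch:

Py_DontWriteBytecodeFlag = 1;   /* as if -B had been passed */
Py_VerboseFlag = 1;             /* as if -v had been passed */
Py_Initialize();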
int Py_BytesWarningFlag
Issue a warning when comparing bytes or bytearray with str or bytes with int. Issue an error if greater
or equal to 2.
Set by the -b option.
int Py_DebugFlag
Turn on parser debugging output (for experts only, depending on compilation options).
Set by the -d option and the PYTHONDEBUG environment variable.
int Py_DontWriteBytecodeFlag
If set to non-zero, Python won’t try to write .pyc files on the import of source modules.
Set by the -B option and the PYTHONDONTWRITEBYTECODE environment variable.
int Py_FrozenFlag
Suppress error messages when calculating the module search path in Py_GetPath().
Private flag used by _freeze_importlib and frozenmain programs.
int Py_HashRandomizationFlag
Set to 1 if the PYTHONHASHSEED environment variable is set to a non-empty string.
If the flag is non-zero, read the PYTHONHASHSEED environment variable to initialize the secret hash seed.
int Py_IgnoreEnvironmentFlag
Ignore all PYTHON* environment variables, e.g. PYTHONPATH and PYTHONHOME, that might be set.
Set by the -E and -I options.
int Py_InspectFlag
When a script is passed as first argument or the -c option is used, enter interactive mode after executing the script
or the command, even when sys.stdin does not appear to be a terminal.
Set by the -i option and the PYTHONINSPECT environment variable.
int Py_InteractiveFlag
Set by the -i option.
int Py_IsolatedFlag
Run Python in isolated mode. In isolated mode sys.path contains neither the script’s directory nor the user’s
site-packages directory.
Set by the -I option.
New in version 3.4.
int Py_LegacyWindowsFSEncodingFlag
If the flag is non-zero, use the mbcs encoding instead of the UTF-8 encoding for the filesystem encoding.
Set to 1 if the PYTHONLEGACYWINDOWSFSENCODING environment variable is set to a non-empty string.
See PEP 529 for more details.
Availability: Windows.
int Py_LegacyWindowsStdioFlag
If the flag is non-zero, use io.FileIO instead of WindowsConsoleIO for sys standard streams.
Set to 1 if the PYTHONLEGACYWINDOWSSTDIO environment variable is set to a non-empty string.
See PEP 528 for more details.
Availability: Windows.
int Py_NoSiteFlag
Disable the import of the module site and the site-dependent manipulations of sys.path that it entails. Also
disable these manipulations if site is explicitly imported later (call site.main() if you want them to be
triggered).
Set by the -S option.
int Py_NoUserSiteDirectory
Don’t add the user site-packages directory to sys.path.
Set by the -s and -I options, and the PYTHONNOUSERSITE environment variable.
int Py_OptimizeFlag
Set by the -O option and the PYTHONOPTIMIZE environment variable.
int Py_QuietFlag
Don’t display the copyright and version messages even in interactive mode.
Set by the -q option.
New in version 3.2.
int Py_UnbufferedStdioFlag
Force the stdout and stderr streams to be unbuffered.
Set by the -u option and the PYTHONUNBUFFERED environment variable.
int Py_VerboseFlag
Print a message each time a module is initialized, showing the place (filename or built-in module) from which it is
loaded. If greater or equal to 2, print a message for each file that is checked for when searching for a module. Also
provides information on module cleanup at exit.
void Py_Initialize()
Initialize the Python interpreter. In an application embedding Python, this should be called before using any other
Python/C API functions; see Before Python Initialization for the few exceptions.
This initializes the table of loaded modules (sys.modules), and creates the fundamental modules builtins,
__main__ and sys. It also initializes the module search path (sys.path). It does not set sys.
argv; use PySys_SetArgvEx() for that. This is a no-op when called for a second time (without calling
Py_FinalizeEx() first). There is no return value; it is a fatal error if the initialization fails.
Note: On Windows, changes the console mode from O_TEXT to O_BINARY, which will also affect non-Python
uses of the console using the C Runtime.
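For context, a minimal embedding sketch (a standalone program; error handling is reduced to the bare minimum):

#include <Python.h>

int
main(void)
{
    Py_Initialize();                       /* start the interpreter */
    PyRun_SimpleString("print('hello from embedded Python')");
    if (Py_FinalizeEx() < 0)               /* shut it down again */
        return 120;
    return 0;
}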
to the empty string. Note that compiled Python bytecode files are platform independent (but not independent from
the Python version by which they were compiled!).
System administrators will know how to configure the mount or automount programs to share /usr/local
between platforms while having /usr/local/plat be a different filesystem for each platform.
wchar_t* Py_GetProgramFullPath()
Return the full program name of the Python executable; this is computed as a side-effect of deriving the default
module search path from the program name (set by Py_SetProgramName() above). The returned string
points into static storage; the caller should not modify its value. The value is available to Python code as sys.
executable.
wchar_t* Py_GetPath()
Return the default module search path; this is computed from the program name (set by
Py_SetProgramName() above) and some environment variables. The returned string consists of a se-
ries of directory names separated by a platform dependent delimiter character. The delimiter character is ':'
on Unix and Mac OS X, ';' on Windows. The returned string points into static storage; the caller should not
modify its value. The list sys.path is initialized with this value on interpreter startup; it can be (and usually is)
modified later to change the search path for loading modules.
void Py_SetPath(const wchar_t *)
Set the default module search path. If this function is called before Py_Initialize(), then Py_GetPath()
won’t attempt to compute a default search path but uses the one provided instead. This is useful if Python is
embedded by an application that has full knowledge of the location of all modules. The path components should be
separated by the platform dependent delimiter character, which is ':' on Unix and Mac OS X, ';' on Windows.
This also causes sys.executable to be set to the program full path (see Py_GetProgramFullPath())
and for sys.prefix and sys.exec_prefix to be empty. It is up to the caller to modify these if required
after calling Py_Initialize().
Use Py_DecodeLocale() to decode a bytes string to get a wchar_t* string.
The path argument is copied internally, so the caller may free it after the call completes.
Changed in version 3.8: The program full path is now used for sys.executable, instead of the program name.
const char* Py_GetVersion()
Return the version of this Python interpreter. This is a string that looks something like
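"3.9.6 (default, Jun 28 2021, 15:26:21) \n[GCC 8.4.0]"
(The build date and compiler details shown here are illustrative and vary by installation.)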
The first word (up to the first space character) is the current Python version; the first three characters are the major
and minor version separated by a period. The returned string points into static storage; the caller should not modify
its value. The value is available to Python code as sys.version.
const char* Py_GetPlatform()
Return the platform identifier for the current platform. On Unix, this is formed from the “official” name of the
operating system, converted to lower case, followed by the major revision number; e.g., for Solaris 2.x, which is
also known as SunOS 5.x, the value is 'sunos5'. On Mac OS X, it is 'darwin'. On Windows, it is 'win'.
The returned string points into static storage; the caller should not modify its value. The value is available to Python
code as sys.platform.
const char* Py_GetCopyright()
Return the official copyright string for the current Python version, for example
'Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam'
The returned string points into static storage; the caller should not modify its value. The value is available to Python
code as sys.copyright.
"[GCC 2.7.2.2]"
The returned string points into static storage; the caller should not modify its value. The value is available to Python
code as part of the variable sys.version.
const char* Py_GetBuildInfo()
Return information about the sequence number and build date and time of the current Python interpreter instance,
for example
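"#67, Aug 1 1997, 22:34:28"
(The sequence number and date shown here are illustrative.)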
The returned string points into static storage; the caller should not modify its value. The value is available to Python
code as part of the variable sys.version.
void PySys_SetArgvEx(int argc, wchar_t **argv, int updatepath)
Set sys.argv based on argc and argv. These parameters are similar to those passed to the program’s main()
function with the difference that the first entry should refer to the script file to be executed rather than the executable
hosting the Python interpreter. If there isn’t a script that will be run, the first entry in argv can be an empty string.
If this function fails to initialize sys.argv, a fatal condition is signalled using Py_FatalError().
If updatepath is zero, this is all the function does. If updatepath is non-zero, the function also modifies sys.path
according to the following algorithm:
• If the name of an existing script is passed in argv[0], the absolute path of the directory where the script
is located is prepended to sys.path.
• Otherwise (that is, if argc is 0 or argv[0] doesn’t point to an existing file name), an empty string is
prepended to sys.path, which is the same as prepending the current working directory (".").
Use Py_DecodeLocale() to decode a bytes string to get a wchar_t* string.
Note: It is recommended that applications embedding the Python interpreter for purposes other than executing a
single script pass 0 as updatepath, and update sys.path themselves if desired. See CVE-2008-5983.
On versions before 3.1.3, you can achieve the same effect by manually popping the first sys.path element after
having called PySys_SetArgv(), for example using:
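/* (sketch) */
PyRun_SimpleString("import sys; sys.path.pop(0)\n");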
wchar_t* Py_GetPythonHome()
Return the default “home”, that is, the value set by a previous call to Py_SetPythonHome(), or the value of
the PYTHONHOME environment variable if it is set.
The Python interpreter is not fully thread-safe. In order to support multi-threaded Python programs, there’s a global lock,
called the global interpreter lock or GIL, that must be held by the current thread before it can safely access Python objects.
Without the lock, even the simplest operations could cause problems in a multi-threaded program: for example, when
two threads simultaneously increment the reference count of the same object, the reference count could end up being
incremented only once instead of twice.
Therefore, the rule exists that only the thread that has acquired the GIL may operate on Python objects or call Python/C
API functions. In order to emulate concurrency of execution, the interpreter regularly tries to switch threads (see sys.
setswitchinterval()). The lock is also released around potentially blocking I/O operations like reading or writing
a file, so that other Python threads can run in the meantime.
The Python interpreter keeps some thread-specific bookkeeping information inside a data structure called
PyThreadState. There’s also one global variable pointing to the current PyThreadState: it can be retrieved
using PyThreadState_Get().
Most extension code manipulating the GIL has the following simple structure:
Py_BEGIN_ALLOW_THREADS
... Do some blocking I/O operation ...
Py_END_ALLOW_THREADS
The Py_BEGIN_ALLOW_THREADS macro opens a new block and declares a hidden local variable; the
Py_END_ALLOW_THREADS macro closes the block.
The block above expands to the following code:
PyThreadState *_save;
_save = PyEval_SaveThread();
... Do some blocking I/O operation ...
PyEval_RestoreThread(_save);
Here is how these functions work: the global interpreter lock is used to protect the pointer to the current thread state.
When releasing the lock and saving the thread state, the current thread state pointer must be retrieved before the lock is
released (since another thread could immediately acquire the lock and store its own thread state in the global variable).
Conversely, when acquiring the lock and restoring the thread state, the lock must be acquired before storing the thread
state pointer.
Note: Calling system I/O functions is the most common use case for releasing the GIL, but it can also be useful before
calling long-running computations which don’t need access to Python objects, such as compression or cryptographic
functions operating over memory buffers. For example, the standard zlib and hashlib modules release the GIL
when compressing or hashing data.
When threads are created using the dedicated Python APIs (such as the threading module), a thread state is
automatically associated to them and the code shown above is therefore correct. However, when threads are created from
C (for example by a third-party library with its own thread management), they don’t hold the GIL, nor is there a thread
state structure for them.
If you need to call Python code from these threads (often this will be part of a callback API provided by the aforementioned
third-party library), you must first register these threads with the interpreter by creating a thread state data structure, then
acquiring the GIL, and finally storing their thread state pointer, before you can start using the Python/C API. When you
are done, you should reset the thread state pointer, release the GIL, and finally free the thread state data structure.
The PyGILState_Ensure() and PyGILState_Release() functions do all of the above automatically. The
typical idiom for calling into Python from a C thread is:
PyGILState_STATE gstate;
gstate = PyGILState_Ensure();
/* Perform Python actions here; the GIL is held. */
PyGILState_Release(gstate);
Note that the PyGILState_*() functions assume there is only one global interpreter (created automatically by
Py_Initialize()). Python supports the creation of additional interpreters (using Py_NewInterpreter()),
but mixing multiple interpreters and the PyGILState_*() API is unsupported.
Another important thing to note about threads is their behaviour in the face of the C fork() call. On most systems with
fork(), after a process forks only the thread that issued the fork will exist. This has a concrete impact both on how
locks must be handled and on all stored state in CPython’s runtime.
The fact that only the “current” thread remains means any locks held by other threads will never be released. Python solves
this for os.fork() by acquiring the locks it uses internally before the fork, and releasing them afterwards. In addition, it
resets any lock-objects in the child. When extending or embedding Python, there is no way to inform Python of additional
(non-Python) locks that need to be acquired before or reset after a fork. OS facilities such as pthread_atfork()
would need to be used to accomplish the same thing. Additionally, when extending or embedding Python, calling fork()
directly rather than through os.fork() (and returning to or calling into Python) may result in a deadlock by one of
Python’s internal locks being held by a thread that is defunct after the fork. PyOS_AfterFork_Child() tries to
reset the necessary locks, but is not always able to.
The fact that all other threads go away also means that CPython’s runtime state there must be cleaned up properly, which
os.fork() does. This means finalizing all other PyThreadState objects belonging to the current interpreter and
all other PyInterpreterState objects. Due to this and the special nature of the “main” interpreter, fork() should
only be called in that interpreter’s “main” thread, where the CPython global runtime was originally initialized. The only
exception is if exec() will be called immediately after.
These are the most commonly used types and functions when writing C extension code, or when embedding the Python
interpreter:
PyInterpreterState
This data structure represents the state shared by a number of cooperating threads. Threads belonging to the same
interpreter share their module administration and a few other internal items. There are no public members in this
structure.
Threads belonging to different interpreters initially share nothing, except process state like available memory, open
file descriptors and such. The global interpreter lock is also shared by all threads, regardless of to which interpreter
they belong.
PyThreadState
This data structure represents the state of a single thread. The only public data member is interp
(PyInterpreterState *), which points to this thread’s interpreter state.
void PyEval_InitThreads()
Deprecated function which does nothing.
In Python 3.6 and older, this function created the GIL if it didn’t exist.
Changed in version 3.9: The function now does nothing.
Changed in version 3.7: This function is now called by Py_Initialize(), so you don’t have to call it yourself
anymore.
Changed in version 3.2: This function cannot be called before Py_Initialize() anymore.
Deprecated since version 3.9, will be removed in version 3.11.
int PyEval_ThreadsInitialized()
Returns a non-zero value if PyEval_InitThreads() has been called. This function can be called without
holding the GIL, and therefore can be used to avoid calls to the locking API when running single-threaded.
Changed in version 3.7: The GIL is now initialized by Py_Initialize().
Deprecated since version 3.9, will be removed in version 3.11.
PyThreadState* PyEval_SaveThread()
Release the global interpreter lock (if it has been created) and reset the thread state to NULL, returning the previous
thread state (which is not NULL). If the lock has been created, the current thread must have acquired it.
void PyEval_RestoreThread(PyThreadState *tstate)
Acquire the global interpreter lock (if it has been created) and set the thread state to tstate, which must not be
NULL. If the lock has been created, the current thread must not have acquired it, otherwise deadlock ensues.
Note: Calling this function from a thread when the runtime is finalizing will terminate the thread, even if the
thread was not created by Python. You can use _Py_IsFinalizing() or sys.is_finalizing() to
check if the interpreter is in process of being finalized before calling this function to avoid unwanted termination.
PyThreadState* PyThreadState_Get()
Return the current thread state. The global interpreter lock must be held. When the current thread state is NULL,
this issues a fatal error (so that the caller needn’t check for NULL).
Note: Calling this function from a thread when the runtime is finalizing will terminate the thread, even if the
thread was not created by Python. You can use _Py_IsFinalizing() or sys.is_finalizing() to
check if the interpreter is in process of being finalized before calling this function to avoid unwanted termination.
void PyGILState_Release(PyGILState_STATE)
Release any resources previously acquired. After this call, Python’s state will be the same as it was prior to the
corresponding PyGILState_Ensure() call (but generally this state will be unknown to the caller, hence the
use of the GILState API).
Every call to PyGILState_Ensure() must be matched by a call to PyGILState_Release() on the same
thread.
PyThreadState* PyGILState_GetThisThreadState()
Get the current thread state for this thread. May return NULL if no GILState API has been used on the current
thread. Note that the main thread always has such a thread-state, even if no auto-thread-state call has been made
on the main thread. This is mainly a helper/diagnostic function.
int PyGILState_Check()
Return 1 if the current thread is holding the GIL and 0 otherwise. This function can be called from any thread
at any time. Only if it has had its Python thread state initialized and currently is holding the GIL will it return 1.
This is mainly a helper/diagnostic function. It can be useful for example in callback contexts or memory allocation
functions when knowing that the GIL is locked can allow the caller to perform sensitive actions or otherwise behave
differently.
New in version 3.4.
The following macros are normally used without a trailing semicolon; look for example usage in the Python source
distribution.
Py_BEGIN_ALLOW_THREADS
This macro expands to { PyThreadState *_save; _save = PyEval_SaveThread();. Note that
it contains an opening brace; it must be matched with a following Py_END_ALLOW_THREADS macro. See above
for further discussion of this macro.
Py_END_ALLOW_THREADS
This macro expands to PyEval_RestoreThread(_save); }. Note that it contains a closing brace; it must
be matched with an earlier Py_BEGIN_ALLOW_THREADS macro. See above for further discussion of this macro.
Py_BLOCK_THREADS
This macro expands to PyEval_RestoreThread(_save);: it is equivalent to
Py_END_ALLOW_THREADS without the closing brace.
Py_UNBLOCK_THREADS
This macro expands to _save = PyEval_SaveThread();: it is equivalent to
Py_BEGIN_ALLOW_THREADS without the opening brace and variable declaration.
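For example, an extension method that performs a blocking C call can release the GIL around it. A minimal sketch, assuming a hypothetical C function do_blocking_io() that uses no Python API:
static PyObject *
wait_for_data(PyObject *self, PyObject *args)
{
    int err;
    Py_BEGIN_ALLOW_THREADS
    err = do_blocking_io();          /* other Python threads may run here */
    Py_END_ALLOW_THREADS
    if (err < 0) {
        PyErr_SetString(PyExc_OSError, "I/O failed");
        return NULL;
    }
    Py_RETURN_NONE;
}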
While in most uses, you will only embed a single Python interpreter, there are cases where you need to create several
independent interpreters in the same process and perhaps even in the same thread. Sub-interpreters allow you to do that.
The “main” interpreter is the first one created when the runtime initializes. It is usually the only Python interpreter in a
process. Unlike sub-interpreters, the main interpreter has unique process-global responsibilities like signal handling. It is
also responsible for execution during runtime initialization and is usually the active interpreter during runtime finalization.
The PyInterpreterState_Main() function returns a pointer to its state.
You can switch between sub-interpreters using the PyThreadState_Swap() function. You can create and destroy
them using the following functions:
PyThreadState* Py_NewInterpreter()
Create a new sub-interpreter. This is an (almost) totally separate environment for the execution of Python code.
In particular, the new interpreter has separate, independent versions of all imported modules, including the fun-
damental modules builtins, __main__ and sys. The table of loaded modules (sys.modules) and the
module search path (sys.path) are also separate. The new environment has no sys.argv variable. It has
new standard I/O stream file objects sys.stdin, sys.stdout and sys.stderr (however these refer to the
same underlying file descriptors).
The return value points to the first thread state created in the new sub-interpreter. This thread state is made in the
current thread state. Note that no actual thread is created; see the discussion of thread states below. If creation
of the new interpreter is unsuccessful, NULL is returned; no exception is set since the exception state is stored in
the current thread state and there may not be a current thread state. (Like all other Python/C API functions, the
global interpreter lock must be held before calling this function and is still held when it returns; however, unlike
most other Python/C API functions, there needn’t be a current thread state on entry.)
Extension modules are shared between (sub-)interpreters as follows:
• For modules using multi-phase initialization, e.g. PyModule_FromDefAndSpec(), a separate mod-
ule object is created and initialized for each interpreter. Only C-level static and global variables are shared
between these module objects.
• For modules using single-phase initialization, e.g. PyModule_Create(), the first time a particular exten-
sion is imported, it is initialized normally, and a (shallow) copy of its module’s dictionary is squirreled away.
When the same extension is imported by another (sub-)interpreter, a new module is initialized and filled with
the contents of this copy; the extension’s init function is not called. Objects in the module’s dictionary thus
end up shared across (sub-)interpreters, which might cause unwanted behavior (see Bugs and caveats below).
Note that this is different from what happens when an extension is imported after the interpreter has been
completely re-initialized by calling Py_FinalizeEx() and Py_Initialize(); in that case, the ex-
tension’s initmodule function is called again. As with multi-phase initialization, this means that only
C-level static and global variables are shared between these modules.
void Py_EndInterpreter(PyThreadState *tstate)
Destroy the (sub-)interpreter represented by the given thread state. The given thread state must be the current
thread state. See the discussion of thread states below. When the call returns, the current thread state is NULL. All
thread states associated with this interpreter are destroyed. (The global interpreter lock must be held before calling
this function and is still held when it returns.) Py_FinalizeEx() will destroy all sub-interpreters that haven’t
been explicitly destroyed at that point.
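A minimal sketch of creating a sub-interpreter, running some code in it and destroying it again, assuming the function is called with the GIL held and the main interpreter's thread state current (the function name is illustrative):
static int
run_in_subinterpreter(const char *code)
{
    PyThreadState *main_tstate = PyThreadState_Get();
    PyThreadState *sub_tstate = Py_NewInterpreter();
    if (sub_tstate == NULL) {
        return -1;
    }
    int rc = PyRun_SimpleString(code);   /* executes in the sub-interpreter */
    Py_EndInterpreter(sub_tstate);       /* current thread state becomes NULL */
    PyThreadState_Swap(main_tstate);     /* switch back to the main interpreter */
    return rc;
}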
Because sub-interpreters (and the main interpreter) are part of the same process, the insulation between them isn’t perfect
— for example, using low-level file operations like os.close() they can (accidentally or maliciously) affect each other’s
open files. Because of the way extensions are shared between (sub-)interpreters, some extensions may not work properly;
this is especially likely when using single-phase initialization or (static) global variables. It is possible to insert objects
created in one sub-interpreter into a namespace of another (sub-)interpreter; this should be avoided if possible.
Special care should be taken to avoid sharing user-defined functions, methods, instances or classes between sub-
interpreters, since import operations executed by such objects may affect the wrong (sub-)interpreter’s dictionary of
loaded modules. It is equally important to avoid sharing objects from which the above are reachable.
Also note that combining this functionality with PyGILState_*() APIs is delicate, because these APIs assume a
bijection between Python thread states and OS-level threads, an assumption broken by the presence of sub-interpreters.
It is highly recommended that you don’t switch sub-interpreters between a pair of matching PyGILState_Ensure()
and PyGILState_Release() calls. Furthermore, extensions (such as ctypes) using these APIs to allow calling
of Python code from non-Python created threads will probably be broken when using sub-interpreters.
A mechanism is provided to make asynchronous notifications to the main interpreter thread. These notifications take the
form of a function pointer and a void pointer argument.
int Py_AddPendingCall(int (*func)(void *), void *arg)
Schedule a function to be called from the main interpreter thread. On success, 0 is returned and func is queued for
being called in the main thread. On failure, -1 is returned without setting any exception.
When successfully queued, func will be eventually called from the main interpreter thread with the argument arg.
It will be called asynchronously with respect to normally running Python code, but with both these conditions met:
• on a bytecode boundary;
• with the main thread holding the global interpreter lock (func can therefore use the full C API).
func must return 0 on success, or -1 on failure with an exception set. func won’t be interrupted to perform another
asynchronous notification recursively, but it can still be interrupted to switch threads if the global interpreter lock
is released.
This function doesn’t need a current thread state to run, and it doesn’t need the global interpreter lock.
To call this function in a subinterpreter, the caller must hold the GIL. Otherwise, the function func can be scheduled
to be called from the wrong interpreter.
Warning: This is a low-level function, only useful for very special cases. There is no guarantee that func will
be called as quickly as possible. If the main thread is busy executing a system call, func won't be called before
the system call returns. This function is generally not suitable for calling Python code from arbitrary C threads.
Instead, use the PyGILState API.
Changed in version 3.9: If this function is called in a subinterpreter, the function func is now scheduled to be called
from the subinterpreter, rather than being called from the main interpreter. Each subinterpreter now has its own
list of scheduled calls.
New in version 3.1.
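A minimal sketch of scheduling a pending call, assuming a hypothetical flag polled elsewhere by the application (the names my_flag and set_flag_pending are illustrative):
static volatile int my_flag = 0;

static int
set_flag_pending(void *arg)
{
    (void)arg;
    my_flag = 1;      /* runs in the main thread with the GIL held */
    return 0;         /* return 0 on success, -1 with an exception set */
}

/* From any C thread: */
if (Py_AddPendingCall(set_flag_pending, NULL) < 0) {
    /* -1: the call could not be queued; no exception is set */
}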
The Python interpreter provides some low-level support for attaching profiling and execution tracing facilities. These are
used for profiling, debugging, and coverage analysis tools.
This C interface allows the profiling or tracing code to avoid the overhead of calling through Python-level callable objects,
making a direct C function call instead. The essential attributes of the facility have not changed; the interface allows trace
functions to be installed per-thread, and the basic events reported to the trace function are the same as had been reported
to the Python-level trace functions in previous versions.
int (*Py_tracefunc)(PyObject *obj, PyFrameObject *frame, int what, PyObject *arg)
The type of the trace function registered using PyEval_SetProfile() and PyEval_SetTrace(). The
first parameter is the object passed to the registration function as obj, frame is the frame object to which the event
pertains, what is one of the constants PyTrace_CALL, PyTrace_EXCEPTION, PyTrace_LINE,
PyTrace_RETURN, PyTrace_C_CALL, PyTrace_C_EXCEPTION, PyTrace_C_RETURN, or
PyTrace_OPCODE, and arg depends on the value of what:
int PyTrace_CALL
The value of the what parameter to a Py_tracefunc function when a new call to a function or method is being
reported, or a new entry into a generator. Note that the creation of the iterator for a generator function is not
reported as there is no control transfer to the Python bytecode in the corresponding frame.
int PyTrace_EXCEPTION
The value of the what parameter to a Py_tracefunc function when an exception has been raised. The callback function is called with this value for what after any bytecode is processed after which the exception becomes set within the frame being executed. The effect of this is that as exception propagation causes the Python stack to unwind, the callback is called upon return to each frame as the exception propagates. Only trace functions receive these events; they are not needed by the profiler.
int PyTrace_LINE
The value passed as the what parameter to a Py_tracefunc function (but not a profiling function) when a
line-number event is being reported. It may be disabled for a frame by setting f_trace_lines to 0 on that
frame.
int PyTrace_RETURN
The value for the what parameter to Py_tracefunc functions when a call is about to return.
int PyTrace_C_CALL
The value for the what parameter to Py_tracefunc functions when a C function is about to be called.
int PyTrace_C_EXCEPTION
The value for the what parameter to Py_tracefunc functions when a C function has raised an exception.
int PyTrace_C_RETURN
The value for the what parameter to Py_tracefunc functions when a C function has returned.
int PyTrace_OPCODE
The value for the what parameter to Py_tracefunc functions (but not profiling functions) when a new op-
code is about to be executed. This event is not emitted by default: it must be explicitly requested by setting
f_trace_opcodes to 1 on the frame.
void PyEval_SetProfile(Py_tracefunc func, PyObject *obj)
Set the profiler function to func. The obj parameter is passed to the function as its first parameter, and may be
any Python object, or NULL. If the profile function needs to maintain state, using a different value for obj for each
thread provides a convenient and thread-safe place to store it. The profile function is called for all monitored events
except PyTrace_LINE, PyTrace_OPCODE and PyTrace_EXCEPTION.
The caller must hold the GIL.
void PyEval_SetTrace(Py_tracefunc func, PyObject *obj)
Set the tracing function to func. This is similar to PyEval_SetProfile(), except the tracing function does
receive line-number events and per-opcode events, but does not receive any event related to C function objects
being called. Any trace function registered using PyEval_SetTrace() will not receive PyTrace_C_CALL,
PyTrace_C_EXCEPTION or PyTrace_C_RETURN as a value for the what parameter.
The caller must hold the GIL.
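A minimal sketch of a profile function that counts function calls (the names call_count and count_calls are illustrative, not part of the API):
static long call_count = 0;

static int
count_calls(PyObject *obj, PyFrameObject *frame, int what, PyObject *arg)
{
    (void)obj; (void)frame; (void)arg;
    if (what == PyTrace_CALL || what == PyTrace_C_CALL) {
        call_count++;
    }
    return 0;
}

/* Install it (the GIL must be held): */
PyEval_SetProfile(count_calls, NULL);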
The Python interpreter provides low-level support for thread-local storage (TLS) which wraps the underlying native TLS
implementation to support the Python-level thread local storage API (threading.local). The CPython C level APIs
are similar to those offered by pthreads and Windows: use a thread key and functions to associate a void* value per
thread.
The GIL does not need to be held when calling these functions; they supply their own locking.
Note that Python.h does not include the declaration of the TLS APIs; you need to include pythread.h to use
thread-local storage.
Note: None of these API functions handle memory management on behalf of the void* values. You need to allocate and deallocate them yourself. If the void* values happen to be PyObject*, these functions don't do refcount operations on them either.
The TSS API is introduced to supersede the use of the existing TLS API within the CPython interpreter. This API uses a new type Py_tss_t instead of int to represent thread keys.
New in version 3.7.
See also:
“A New C-API for Thread-Local Storage in CPython” (PEP 539)
Py_tss_t
This data structure represents the state of a thread key, the definition of which may depend on the underlying TLS
implementation, and it has an internal field representing the key’s initialization state. There are no public members
in this structure.
When Py_LIMITED_API is not defined, static allocation of this type by Py_tss_NEEDS_INIT is allowed.
Py_tss_NEEDS_INIT
This macro expands to the initializer for Py_tss_t variables. Note that this macro won’t be defined with
Py_LIMITED_API.
Dynamic Allocation
Dynamic allocation of the Py_tss_t, required in extension modules built with Py_LIMITED_API, where static alloca-
tion of this type is not possible due to its implementation being opaque at build time.
Py_tss_t* PyThread_tss_alloc()
Return a value which is the same state as a value initialized with Py_tss_NEEDS_INIT, or NULL in the case
of dynamic allocation failure.
void PyThread_tss_free(Py_tss_t *key)
Free the given key allocated by PyThread_tss_alloc(), after first calling PyThread_tss_delete()
to ensure any associated thread locals have been unassigned. This is a no-op if the key argument is NULL.
Note: A freed key becomes a dangling pointer; you should reset the key to NULL.
Methods
The parameter key of these functions must not be NULL. Moreover, the behaviors of PyThread_tss_set()
and PyThread_tss_get() are undefined if the given Py_tss_t has not been initialized by
PyThread_tss_create().
int PyThread_tss_is_created(Py_tss_t *key)
Return a non-zero value if the given Py_tss_t has been initialized by PyThread_tss_create().
int PyThread_tss_create(Py_tss_t *key)
Return a zero value on successful initialization of a TSS key. The behavior is undefined if the value pointed to
by the key argument is not initialized by Py_tss_NEEDS_INIT. This function can be called repeatedly on the
same key – calling it on an already initialized key is a no-op and immediately returns success.
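A minimal sketch of storing a per-thread pointer with the TSS API, assuming Py_LIMITED_API is not defined so that static allocation is possible; PyThread_tss_set() and PyThread_tss_get() are the companion setter and getter, and the names my_key and the helper functions are illustrative:
static Py_tss_t my_key = Py_tss_NEEDS_INIT;

int
store_per_thread_value(void *value)
{
    if (PyThread_tss_create(&my_key) != 0) {   /* no-op if already created */
        return -1;
    }
    return PyThread_tss_set(&my_key, value);   /* 0 on success */
}

void *
load_per_thread_value(void)
{
    if (!PyThread_tss_is_created(&my_key)) {
        return NULL;
    }
    return PyThread_tss_get(&my_key);
}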
Thread Local Storage (TLS) API
Deprecated since version 3.7: This API is superseded by the Thread Specific Storage (TSS) API.
Note: This version of the API does not support platforms where the native TLS key is defined in a way that cannot be
safely cast to int. On such platforms, PyThread_create_key() will return immediately with a failure status, and
the other TLS functions will all be no-ops on such platforms.
Due to the compatibility problem noted above, this version of the API should not be used in new code.
int PyThread_create_key()
void PyThread_delete_key(int key)
int PyThread_set_key_value(int key, void *value)
void* PyThread_get_key_value(int key)
void PyThread_delete_key_value(int key)
void PyThread_ReInitTLS()
TEN
PYTHON INITIALIZATION CONFIGURATION
• Py_ExitStatusException()
• Py_InitializeFromConfig()
• Py_PreInitialize()
• Py_PreInitializeFromArgs()
• Py_PreInitializeFromBytesArgs()
• Py_RunMain()
• Py_GetArgcArgv()
The preconfiguration (PyPreConfig type) is stored in _PyRuntime.preconfig and the configuration
(PyConfig type) is stored in PyInterpreterState.config.
See also Initialization, Finalization, and Threads.
See also:
PEP 587 “Python Initialization Configuration”.
10.1 PyWideStringList
PyWideStringList
List of wchar_t* strings.
If length is non-zero, items must be non-NULL and all strings must be non-NULL.
Methods:
PyStatus PyWideStringList_Append(PyWideStringList *list, const wchar_t *item)
Append item to list.
Python must be preinitialized to call this function.
PyStatus PyWideStringList_Insert(PyWideStringList *list, Py_ssize_t index, const wchar_t *item)
Insert item into list at index.
If index is greater than or equal to list length, append item to list.
index must be greater than or equal to 0.
Python must be preinitialized to call this function.
Structure fields:
Py_ssize_t length
List length.
wchar_t** items
List items.
10.2 PyStatus
PyStatus
Structure to store an initialization function status: success, error or exit.
For an error, it can store the C function name which created the error.
Structure fields:
int exitcode
Exit code. Argument passed to exit().
const char *err_msg
Error message.
const char *func
Name of the function which created an error, can be NULL.
Functions to create a status:
PyStatus PyStatus_Ok(void)
Success.
PyStatus PyStatus_Error(const char *err_msg)
Initialization error with a message.
PyStatus PyStatus_NoMemory(void)
Memory allocation failure (out of memory).
PyStatus PyStatus_Exit(int exitcode)
Exit Python with the specified exit code.
Functions to handle a status:
int PyStatus_Exception(PyStatus status)
Is the status an error or an exit? If true, the exception must be handled, for example by calling
Py_ExitStatusException().
int PyStatus_IsError(PyStatus status)
Is the result an error?
int PyStatus_IsExit(PyStatus status)
Is the result an exit?
void Py_ExitStatusException(PyStatus status)
Call exit(exitcode) if status is an exit. Print the error message and exit with a non-zero exit code if
status is an error. Must only be called if PyStatus_Exception(status) is non-zero.
Note: Internally, Python uses macros which set PyStatus.func, whereas functions to create a status set func to
NULL.
Example:
PyStatus alloc(void **ptr, size_t size)
{
*ptr = PyMem_RawMalloc(size);
if (*ptr == NULL) {
return PyStatus_NoMemory();
}
return PyStatus_Ok();
}
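A sketch of a caller handling the returned status might look like this:
void *ptr;
PyStatus status = alloc(&ptr, 16);
if (PyStatus_Exception(status)) {
    Py_ExitStatusException(status);
}
PyMem_RawFree(ptr);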
10.3 PyPreConfig
PyPreConfig
Structure used to preinitialize Python:
• Set the Python memory allocator
• Configure the LC_CTYPE locale
• Set the UTF-8 mode
Function to initialize a preconfiguration:
void PyPreConfig_InitPythonConfig(PyPreConfig *preconfig)
Initialize the preconfiguration with Python Configuration.
void PyPreConfig_InitIsolatedConfig(PyPreConfig *preconfig)
Initialize the preconfiguration with Isolated Configuration.
Structure fields:
int allocator
Name of the memory allocator:
• PYMEM_ALLOCATOR_NOT_SET (0): don’t change memory allocators (use defaults)
• PYMEM_ALLOCATOR_DEFAULT (1): default memory allocators
• PYMEM_ALLOCATOR_DEBUG (2): default memory allocators with debug hooks
• PYMEM_ALLOCATOR_MALLOC (3): force usage of malloc()
• PYMEM_ALLOCATOR_MALLOC_DEBUG (4): force usage of malloc() with debug hooks
• PYMEM_ALLOCATOR_PYMALLOC (5): Python pymalloc memory allocator
• PYMEM_ALLOCATOR_PYMALLOC_DEBUG (6): Python pymalloc memory allocator with debug hooks
PYMEM_ALLOCATOR_PYMALLOC and PYMEM_ALLOCATOR_PYMALLOC_DEBUG are not supported if
Python is configured using --without-pymalloc.
See Memory Management.
int configure_locale
Set the LC_CTYPE locale to the user preferred locale? If equal to 0, set coerce_c_locale and
coerce_c_locale_warn to 0.
int coerce_c_locale
If equal to 2, coerce the C locale; if equal to 1, read the LC_CTYPE locale to decide if it should be coerced.
int coerce_c_locale_warn
If non-zero, emit a warning if the C locale is coerced.
int dev_mode
See PyConfig.dev_mode.
int isolated
See PyConfig.isolated.
int legacy_windows_fs_encoding (Windows only)
If non-zero, disable UTF-8 Mode, set the Python filesystem encoding to mbcs, set the filesystem error handler
to replace.
Only available on Windows. #ifdef MS_WINDOWS macro can be used for Windows specific code.
int parse_argv
If non-zero, Py_PreInitializeFromArgs() and Py_PreInitializeFromBytesArgs()
parse their argv argument the same way the regular Python parses command line arguments: see Com-
mand Line Arguments.
int use_environment
See PyConfig.use_environment.
int utf8_mode
If non-zero, enable the UTF-8 mode.
Example using the preinitialization to enable the UTF-8 Mode:
PyStatus status;
PyPreConfig preconfig;
PyPreConfig_InitPythonConfig(&preconfig);
preconfig.utf8_mode = 1;
status = Py_PreInitialize(&preconfig);
if (PyStatus_Exception(status)) {
Py_ExitStatusException(status);
}
Py_Initialize();
/* ... use Python API here ... */
Py_Finalize();
10.5 PyConfig
PyConfig
Structure containing most parameters to configure Python.
Structure methods:
void PyConfig_InitPythonConfig(PyConfig *config)
Initialize configuration with Python Configuration.
void PyConfig_InitIsolatedConfig(PyConfig *config)
Initialize configuration with Isolated Configuration.
PyStatus PyConfig_SetString(PyConfig *config, wchar_t * const *config_str, const wchar_t *str)
Copy the wide character string str into *config_str.
Preinitialize Python if needed.
PyStatus PyConfig_SetBytesString(PyConfig *config, wchar_t * const *config_str, const char *str)
Decode str using Py_DecodeLocale() and set the result into *config_str.
Preinitialize Python if needed.
PyStatus PyConfig_SetArgv(PyConfig *config, int argc, wchar_t * const *argv)
Set command line arguments from wide character strings.
Preinitialize Python if needed.
PyStatus PyConfig_SetBytesArgv(PyConfig *config, int argc, char * const *argv)
Set command line arguments: decode bytes using Py_DecodeLocale().
Preinitialize Python if needed.
PyStatus PyConfig_SetWideStringList(PyConfig *config, PyWideStringList *list,
Py_ssize_t length, wchar_t **items)
Set the list of wide strings list to length and items.
Preinitialize Python if needed.
PyStatus PyConfig_Read(PyConfig *config)
Read all Python configuration.
Fields which are already initialized are left unchanged.
int configure_c_stdio
If non-zero, configure C standard streams (stdin, stdout, stderr). For example, set their mode to
O_BINARY on Windows.
int dev_mode
If non-zero, enable the Python Development Mode.
int dump_refs
If non-zero, dump all objects which are still alive at exit.
Py_TRACE_REFS macro must be defined in build.
wchar_t* exec_prefix
sys.exec_prefix.
wchar_t* executable
sys.executable.
int faulthandler
If non-zero, call faulthandler.enable() at startup.
wchar_t* filesystem_encoding
Filesystem encoding, sys.getfilesystemencoding().
wchar_t* filesystem_errors
Filesystem encoding errors, sys.getfilesystemencodeerrors().
unsigned long hash_seed
int use_hash_seed
Randomized hash function seed.
If use_hash_seed is zero, a seed is chosen randomly at Python startup, and hash_seed is ignored.
wchar_t* home
Python home directory.
Initialized from PYTHONHOME environment variable value by default.
int import_time
If non-zero, profile import time.
int inspect
Enter interactive mode after executing a script or a command.
int install_signal_handlers
Install signal handlers?
int interactive
Interactive mode.
int isolated
If greater than 0, enable isolated mode:
• sys.path contains neither the script’s directory (computed from argv[0] or the current directory)
nor the user’s site-packages directory.
• Python REPL doesn’t import readline nor enable default readline configuration on interactive
prompts.
• Set use_environment and user_site_directory to 0.
int legacy_windows_stdio
If non-zero, use io.FileIO instead of io.WindowsConsoleIO for sys.stdin, sys.stdout and
sys.stderr.
Only available on Windows. #ifdef MS_WINDOWS macro can be used for Windows specific code.
int malloc_stats
If non-zero, dump statistics on Python pymalloc memory allocator at exit.
The option is ignored if Python is built using --without-pymalloc.
wchar_t* pythonpath_env
Module search paths as a string separated by DELIM (os.path.pathsep).
Initialized from PYTHONPATH environment variable value by default.
PyWideStringList module_search_paths
int module_search_paths_set
sys.path. If module_search_paths_set is equal to 0, the module_search_paths is over-
ridden by the function calculating the Path Configuration.
int optimization_level
Compilation optimization level:
• 0: Peephole optimizer (and __debug__ is set to True)
• 1: Remove assertions, set __debug__ to False
• 2: Strip docstrings
int parse_argv
If non-zero, parse argv the same way the regular Python parses command line arguments, and strip Python arguments from argv: see Command Line Arguments.
int parser_debug
If non-zero, turn on parser debugging output (for expert only, depending on compilation options).
int pathconfig_warnings
If equal to 0, suppress warnings when calculating the Path Configuration (Unix only, Windows does not log
any warning). Otherwise, warnings are written into stderr.
wchar_t* prefix
sys.prefix.
wchar_t* program_name
Program name. Used to initialize executable, and in early error messages.
wchar_t* pycache_prefix
sys.pycache_prefix: .pyc cache prefix.
If NULL, sys.pycache_prefix is set to None.
int quiet
Quiet mode. For example, don’t display the copyright and version messages in interactive mode.
wchar_t* run_command
python3 -c COMMAND argument. Used by Py_RunMain().
wchar_t* run_filename
python3 FILENAME argument. Used by Py_RunMain().
wchar_t* run_module
python3 -m MODULE argument. Used by Py_RunMain().
int show_ref_count
Show total reference count at exit?
Set to 1 by -X showrefcount command line option.
Example of a function initializing Python with the default Python configuration:
void init_python(void)
{
PyStatus status;
PyConfig config;
PyConfig_InitPythonConfig(&config);
status = Py_InitializeFromConfig(&config);
if (PyStatus_Exception(status)) {
goto fail;
}
PyConfig_Clear(&config);
return;
fail:
PyConfig_Clear(&config);
Py_ExitStatusException(status);
}
A more complete example modifies the default configuration, reads the configuration, and can then override some parameters:
PyStatus init_python(void)
{
    PyStatus status;
    PyConfig config;
    PyConfig_InitPythonConfig(&config);
    /* Read the configuration; fields may then be overridden. */
    status = PyConfig_Read(&config);
    if (PyStatus_Exception(status)) {
        goto done;
    }
    status = Py_InitializeFromConfig(&config);
done:
    PyConfig_Clear(&config);
    return status;
}
Example of customized Python always running in isolated mode:
int main(void)
{
PyStatus status;
PyConfig config;
PyConfig_InitPythonConfig(&config);
config.isolated = 1;
status = Py_InitializeFromConfig(&config);
if (PyStatus_Exception(status)) {
goto fail;
}
PyConfig_Clear(&config);
return Py_RunMain();
fail:
PyConfig_Clear(&config);
if (PyStatus_IsExit(status)) {
return status.exitcode;
}
/* Display the error message and exit the process with
non-zero exit code */
Py_ExitStatusException(status);
}
10.10 Py_RunMain()
int Py_RunMain(void)
Execute the command (PyConfig.run_command), the script (PyConfig.run_filename) or the module
(PyConfig.run_module) specified on the command line or in the configuration.
By default, and when the -i option is used, run the REPL.
Finally, finalizes Python and returns an exit status that can be passed to the exit() function.
See Python Configuration for an example of customized Python always running in isolated mode using Py_RunMain().
10.11 Py_GetArgcArgv()
10.12 Multi-Phase Initialization Private Provisional API
This section is a private provisional API introducing multi-phase initialization, the core feature of PEP 432:
• “Core” initialization phase, “bare minimum Python”:
– Builtin types;
– Builtin exceptions;
– Builtin and frozen modules;
– The sys module is only partially initialized (ex: sys.path doesn’t exist yet).
• “Main” initialization phase, Python is fully initialized:
– Install and configure importlib;
– Apply the Path Configuration;
– Install signal handlers;
– Finish sys module initialization (ex: create sys.stdout and sys.path);
– Enable optional features like faulthandler and tracemalloc;
– Import the site module;
– etc.
Private provisional API:
• PyConfig._init_main: if set to 0, Py_InitializeFromConfig() stops at the “Core” initialization
phase.
• PyConfig._isolated_interpreter: if non-zero, disallow threads, subprocesses and fork.
PyStatus _Py_InitializeMain(void)
Move to the “Main” initialization phase, finish the Python initialization.
No module is imported during the “Core” phase and the importlib module is not configured: the Path Configuration is only applied during the “Main” phase. This may make it possible to customize Python in Python to override or tune the Path Configuration, for example to install a custom sys.meta_path importer or an import hook.
It may become possible to calculate the Path Configuration in Python, after the Core phase and before the Main phase, which is one of the motivations of PEP 432.
The “Core” phase is not properly defined: what should be and what should not be available at this phase is not specified
yet. The API is marked as private and provisional: the API can be modified or even be removed anytime until a proper
public API is designed.
Example running Python code between “Core” and “Main” initialization phases:
void init_python(void)
{
PyStatus status;
PyConfig config;
PyConfig_InitPythonConfig(&config);
config._init_main = 0;
status = Py_InitializeFromConfig(&config);
PyConfig_Clear(&config);
if (PyStatus_Exception(status)) {
Py_ExitStatusException(status);
}
status = _Py_InitializeMain();
if (PyStatus_Exception(status)) {
Py_ExitStatusException(status);
}
}
ELEVEN
MEMORY MANAGEMENT
11.1 Overview
Memory management in Python involves a private heap containing all Python objects and data structures. The manage-
ment of this private heap is ensured internally by the Python memory manager. The Python memory manager has different
components which deal with various dynamic storage management aspects, like sharing, segmentation, preallocation or
caching.
At the lowest level, a raw memory allocator ensures that there is enough room in the private heap for storing all Python-
related data by interacting with the memory manager of the operating system. On top of the raw memory allocator, several
object-specific allocators operate on the same heap and implement distinct memory management policies adapted to the
peculiarities of every object type. For example, integer objects are managed differently within the heap than strings, tuples
or dictionaries because integers imply different storage requirements and speed/space tradeoffs. The Python memory
manager thus delegates some of the work to the object-specific allocators, but ensures that the latter operate within the
bounds of the private heap.
It is important to understand that the management of the Python heap is performed by the interpreter itself and that the
user has no control over it, even if they regularly manipulate object pointers to memory blocks inside that heap. The
allocation of heap space for Python objects and other internal buffers is performed on demand by the Python memory
manager through the Python/C API functions listed in this document.
To avoid memory corruption, extension writers should never try to operate on Python objects with the functions exported
by the C library: malloc(), calloc(), realloc() and free(). This will result in mixed calls between the
C allocator and the Python memory manager with fatal consequences, because they implement different algorithms and
operate on different heaps. However, one may safely allocate and release memory blocks with the C library allocator for
individual purposes, as shown in the following example:
PyObject *res;
char *buf = (char *) malloc(BUFSIZ); /* for I/O */
if (buf == NULL)
return PyErr_NoMemory();
...Do some I/O operation involving buf...
res = PyBytes_FromString(buf);
free(buf); /* malloc'ed */
return res;
In this example, the memory request for the I/O buffer is handled by the C library allocator. The Python memory manager
is involved only in the allocation of the bytes object returned as a result.
In most situations, however, it is recommended to allocate memory from the Python heap specifically because the latter
is under control of the Python memory manager. For example, this is required when the interpreter is extended with new
object types written in C. Another reason for using the Python heap is the desire to inform the Python memory manager
about the memory needs of the extension module. Even when the requested memory is used exclusively for internal,
highly-specific purposes, delegating all memory requests to the Python memory manager causes the interpreter to have a
more accurate image of its memory footprint as a whole. Consequently, under certain circumstances, the Python memory
manager may or may not trigger appropriate actions, like garbage collection, memory compaction or other preventive
procedures. Note that by using the C library allocator as shown in the previous example, the allocated memory for the
I/O buffer escapes completely the Python memory manager.
See also:
The PYTHONMALLOC environment variable can be used to configure the memory allocators used by Python.
The PYTHONMALLOCSTATS environment variable can be used to print statistics of the pymalloc memory allocator every
time a new pymalloc object arena is created, and on shutdown.
The following function sets are wrappers to the system allocator. These functions are thread-safe, the GIL does not need
to be held.
The default raw memory allocator uses the following functions: malloc(), calloc(), realloc() and free();
call malloc(1) (or calloc(1, 1)) when requesting zero bytes.
New in version 3.4.
void* PyMem_RawMalloc(size_t n)
Allocates n bytes and returns a pointer of type void* to the allocated memory, or NULL if the request fails.
Requesting zero bytes returns a distinct non-NULL pointer if possible, as if PyMem_RawMalloc(1) had been
called instead. The memory will not have been initialized in any way.
void* PyMem_RawCalloc(size_t nelem, size_t elsize)
Allocates nelem elements each whose size in bytes is elsize and returns a pointer of type void* to the allocated
memory, or NULL if the request fails. The memory is initialized to zeros.
Requesting zero elements or elements of size zero bytes returns a distinct non-NULL pointer if possible, as if
PyMem_RawCalloc(1, 1) had been called instead.
New in version 3.5.
void* PyMem_RawRealloc(void *p, size_t n)
Resizes the memory block pointed to by p to n bytes. The contents will be unchanged to the minimum of the old
and the new sizes.
If p is NULL, the call is equivalent to PyMem_RawMalloc(n); else if n is equal to zero, the memory block is
resized but is not freed, and the returned pointer is non-NULL.
Unless p is NULL, it must have been returned by a previous call to PyMem_RawMalloc(),
PyMem_RawRealloc() or PyMem_RawCalloc().
If the request fails, PyMem_RawRealloc() returns NULL and p remains a valid pointer to the previous memory
area.
void PyMem_RawFree(void *p)
Frees the memory block pointed to by p, which must have been returned by a previous call to
PyMem_RawMalloc(), PyMem_RawRealloc() or PyMem_RawCalloc(). Otherwise, or if
PyMem_RawFree(p) has been called before, undefined behavior occurs.
If p is NULL, no operation is performed.
The following function sets, modeled after the ANSI C standard, but specifying behavior when requesting zero bytes, are
available for allocating and releasing memory from the Python heap.
The default memory allocator uses the pymalloc memory allocator.
Changed in version 3.6: The default allocator is now pymalloc instead of system malloc().
void* PyMem_Malloc(size_t n)
Allocates n bytes and returns a pointer of type void* to the allocated memory, or NULL if the request fails.
Requesting zero bytes returns a distinct non-NULL pointer if possible, as if PyMem_Malloc(1) had been called
instead. The memory will not have been initialized in any way.
void* PyMem_Calloc(size_t nelem, size_t elsize)
Allocates nelem elements each whose size in bytes is elsize and returns a pointer of type void* to the allocated
memory, or NULL if the request fails. The memory is initialized to zeros.
Requesting zero elements or elements of size zero bytes returns a distinct non-NULL pointer if possible, as if
PyMem_Calloc(1, 1) had been called instead.
New in version 3.5.
void* PyMem_Realloc(void *p, size_t n)
Resizes the memory block pointed to by p to n bytes. The contents will be unchanged to the minimum of the old
and the new sizes.
If p is NULL, the call is equivalent to PyMem_Malloc(n); else if n is equal to zero, the memory block is resized
but is not freed, and the returned pointer is non-NULL.
Unless p is NULL, it must have been returned by a previous call to PyMem_Malloc(), PyMem_Realloc()
or PyMem_Calloc().
If the request fails, PyMem_Realloc() returns NULL and p remains a valid pointer to the previous memory
area.
void PyMem_Free(void *p)
Frees the memory block pointed to by p, which must have been returned by a previous call to PyMem_Malloc(),
PyMem_Realloc() or PyMem_Calloc(). Otherwise, or if PyMem_Free(p) has been called before, un-
defined behavior occurs.
If p is NULL, no operation is performed.
The following type-oriented macros are provided for convenience. Note that TYPE refers to any C type.
TYPE* PyMem_New(TYPE, size_t n)
Same as PyMem_Malloc(), but allocates (n * sizeof(TYPE)) bytes of memory. Returns a pointer cast
to TYPE*. The memory will not have been initialized in any way.
TYPE* PyMem_Resize(void *p, TYPE, size_t n)
Same as PyMem_Realloc(), but the memory block is resized to (n * sizeof(TYPE)) bytes. Returns a
pointer cast to TYPE*. On return, p will be a pointer to the new memory area, or NULL in the event of failure.
This is a C preprocessor macro; p is always reassigned. Save the original value of p to avoid losing memory when
handling errors.
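A minimal sketch of the safe resize pattern described above (a fragment, assumed to run inside a function returning PyObject*):
char *items = PyMem_New(char, 64);
if (items == NULL)
    return PyErr_NoMemory();

char *old_items = items;            /* keep the original pointer */
PyMem_Resize(items, char, 128);     /* the macro reassigns items */
if (items == NULL) {
    PyMem_Free(old_items);          /* the original block is still valid: free it */
    return PyErr_NoMemory();
}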
The following function sets, modeled after the ANSI C standard, but specifying behavior when requesting zero bytes, are
available for allocating and releasing memory from the Python heap.
The default object allocator uses the pymalloc memory allocator.
void* PyObject_Malloc(size_t n)
Allocates n bytes and returns a pointer of type void* to the allocated memory, or NULL if the request fails.
Requesting zero bytes returns a distinct non-NULL pointer if possible, as if PyObject_Malloc(1) had been
called instead. The memory will not have been initialized in any way.
void* PyObject_Calloc(size_t nelem, size_t elsize)
Allocates nelem elements each whose size in bytes is elsize and returns a pointer of type void* to the allocated
memory, or NULL if the request fails. The memory is initialized to zeros.
Requesting zero elements or elements of size zero bytes returns a distinct non-NULL pointer if possible, as if
PyObject_Calloc(1, 1) had been called instead.
New in version 3.5.
void* PyObject_Realloc(void *p, size_t n)
Resizes the memory block pointed to by p to n bytes. The contents will be unchanged to the minimum of the old
and the new sizes.
If p is NULL, the call is equivalent to PyObject_Malloc(n); else if n is equal to zero, the memory block is
resized but is not freed, and the returned pointer is non-NULL.
Unless p is NULL, it must have been returned by a previous call to PyObject_Malloc(),
PyObject_Realloc() or PyObject_Calloc().
If the request fails, PyObject_Realloc() returns NULL and p remains a valid pointer to the previous memory
area.
void PyObject_Free(void *p)
Frees the memory block pointed to by p, which must have been returned by a previous call to PyObject_Malloc(), PyObject_Realloc() or PyObject_Calloc(). Otherwise, or if PyObject_Free(p) has been called before, undefined behavior occurs.
If p is NULL, no operation is performed.
Legend:
• Name: value for PYTHONMALLOC environment variable
• malloc: system allocators from the standard C library, C functions: malloc(), calloc(), realloc()
and free()
• pymalloc: pymalloc memory allocator
• “+ debug”: with debug hooks installed by PyMem_SetupDebugHooks()
PyMemAllocatorEx
Structure describing a memory block allocator. It has the following fields:
void *ctx: user context passed as first argument
void* malloc(void *ctx, size_t size): allocate a memory block
void* calloc(void *ctx, size_t nelem, size_t elsize): allocate a memory block initialized with zeros
void* realloc(void *ctx, void *ptr, size_t new_size): allocate or resize a memory block
void free(void *ctx, void *ptr): free a memory block
Changed in version 3.5: The PyMemAllocator structure was renamed to PyMemAllocatorEx and a new
calloc field was added.
PyMemAllocatorDomain
Enum used to identify an allocator domain. Domains:
PYMEM_DOMAIN_RAW
Functions:
• PyMem_RawMalloc()
• PyMem_RawRealloc()
• PyMem_RawCalloc()
• PyMem_RawFree()
PYMEM_DOMAIN_MEM
Functions:
• PyMem_Malloc(),
• PyMem_Realloc()
• PyMem_Calloc()
• PyMem_Free()
PYMEM_DOMAIN_OBJ
Functions:
• PyObject_Malloc()
• PyObject_Realloc()
• PyObject_Calloc()
• PyObject_Free()
void PyMem_GetAllocator(PyMemAllocatorDomain domain, PyMemAllocatorEx *allocator)
Get the memory block allocator of the specified domain.
void PyMem_SetAllocator(PyMemAllocatorDomain domain, PyMemAllocatorEx *allocator)
Set the memory block allocator of the specified domain.
The new allocator must return a distinct non-NULL pointer when requesting zero bytes.
For the PYMEM_DOMAIN_RAW domain, the allocator must be thread-safe: the GIL is not held when the allocator
is called.
If the new allocator is not a hook (does not call the previous allocator), the PyMem_SetupDebugHooks()
function must be called to reinstall the debug hooks on top on the new allocator.
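A minimal sketch of a hook on the raw domain that counts allocations and forwards everything to the previous allocator (the names orig_raw, counting_malloc and install_raw_hook are illustrative only):
static PyMemAllocatorEx orig_raw;
static size_t alloc_count = 0;

static void *
counting_malloc(void *ctx, size_t size)
{
    (void)ctx;
    alloc_count++;
    return orig_raw.malloc(orig_raw.ctx, size);
}

void
install_raw_hook(void)
{
    PyMemAllocatorEx hook;
    PyMem_GetAllocator(PYMEM_DOMAIN_RAW, &orig_raw);
    hook = orig_raw;                 /* reuse ctx, calloc, realloc and free */
    hook.malloc = counting_malloc;   /* only malloc is wrapped */
    PyMem_SetAllocator(PYMEM_DOMAIN_RAW, &hook);
}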
void PyMem_SetupDebugHooks(void)
Setup hooks to detect bugs in the Python memory allocator functions.
Newly allocated memory is filled with the byte 0xCD (CLEANBYTE), freed memory is filled with the byte 0xDD
(DEADBYTE). Memory blocks are surrounded by “forbidden bytes” (FORBIDDENBYTE: byte 0xFD).
Runtime checks:
• Detect API violations, ex: PyObject_Free() called on a buffer allocated by PyMem_Malloc()
• Detect write before the start of the buffer (buffer underflow)
• Detect write after the end of the buffer (buffer overflow)
• Check that the GIL is held when allocator functions of PYMEM_DOMAIN_OBJ (ex:
PyObject_Malloc()) and PYMEM_DOMAIN_MEM (ex: PyMem_Malloc()) domains are called
On error, the debug hooks use the tracemalloc module to get the traceback where a memory block was allo-
cated. The traceback is only displayed if tracemalloc is tracing Python memory allocations and the memory
block was traced.
These hooks are installed by default if Python is compiled in debug mode. The PYTHONMALLOC environment
variable can be used to install debug hooks on a Python compiled in release mode.
Changed in version 3.6: This function now also works on Python compiled in release mode. On error, the debug
hooks now use tracemalloc to get the traceback where a memory block was allocated. The debug hooks now
also check if the GIL is held when functions of PYMEM_DOMAIN_OBJ and PYMEM_DOMAIN_MEM domains are
called.
Changed in version 3.8: Byte patterns 0xCB (CLEANBYTE), 0xDB (DEADBYTE) and 0xFB
(FORBIDDENBYTE) have been replaced with 0xCD, 0xDD and 0xFD to use the same values than Win-
dows CRT debug malloc() and free().
Python has a pymalloc allocator optimized for small objects (smaller or equal to 512 bytes) with a short lifetime. It
uses memory mappings called “arenas” with a fixed size of 256 KiB. It falls back to PyMem_RawMalloc() and
PyMem_RawRealloc() for allocations larger than 512 bytes.
pymalloc is the default allocator of the PYMEM_DOMAIN_MEM (ex: PyMem_Malloc()) and PYMEM_DOMAIN_OBJ
(ex: PyObject_Malloc()) domains.
The arena allocator uses the following functions:
• VirtualAlloc() and VirtualFree() on Windows,
• mmap() and munmap() if available,
• malloc() and free() otherwise.
The arena allocator is described by a structure with the following fields:
void *ctx: user context passed as first argument
void* alloc(void *ctx, size_t size): allocate an arena of size bytes
void free(void *ctx, void *ptr, size_t size): free an arena
11.9 Examples
Here is the example from section Overview, rewritten so that the I/O buffer is allocated from the Python heap by using
the first function set:
PyObject *res;
char *buf = (char *) PyMem_Malloc(BUFSIZ); /* for I/O */
if (buf == NULL)
return PyErr_NoMemory();
/* ...Do some I/O operation involving buf... */
res = PyBytes_FromString(buf);
PyMem_Free(buf); /* allocated with PyMem_Malloc */
return res;
The same code, using the type-oriented function set:
PyObject *res;
char *buf = PyMem_New(char, BUFSIZ); /* for I/O */
if (buf == NULL)
return PyErr_NoMemory();
/* ...Do some I/O operation involving buf... */
res = PyBytes_FromString(buf);
PyMem_Del(buf); /* allocated with PyMem_New */
return res;
Note that in the two examples above, the buffer is always manipulated via functions belonging to the same set. Indeed, it
is required to use the same memory API family for a given memory block, so that the risk of mixing different allocators
is reduced to a minimum. The following code sequence contains two errors, one of which is labeled as fatal because it
mixes two different allocators operating on different heaps.
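A sketch of such an erroneous sequence (the buffer names are illustrative):
char *buf1 = PyMem_New(char, BUFSIZ);
char *buf2 = (char *) malloc(BUFSIZ);
char *buf3 = (char *) PyMem_Malloc(BUFSIZ);
/* ... */
PyObject_Free(buf3);  /* Wrong -- should be PyMem_Free() */
free(buf2);           /* Right -- allocated via malloc() */
free(buf1);           /* Fatal -- should be PyMem_Del() */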
In addition to the functions aimed at handling raw memory blocks from the Python heap, objects in Python are allocated
and released with PyObject_New(), PyObject_NewVar() and PyObject_Del().
These will be explained in the next chapter on defining and implementing new object types in C.
TWELVE
OBJECT IMPLEMENTATION SUPPORT
This chapter describes the functions, types, and macros used when defining new object types.
There are a large number of structures which are used in the definition of object types for Python. This section describes
these structures and how they are used.
All Python objects ultimately share a small number of fields at the beginning of the object’s representation in memory.
These are represented by the PyObject and PyVarObject types, which are defined, in turn, by the expansions of
some macros also used, whether directly or indirectly, in the definition of all other Python objects.
PyObject
All object types are extensions of this type. This is a type which contains the information Python needs to treat
a pointer to an object as an object. In a normal “release” build, it contains only the object’s reference count and
a pointer to the corresponding type object. Nothing is actually declared to be a PyObject, but every pointer
to a Python object can be cast to a PyObject*. Access to the members must be done by using the macros
Py_REFCNT and Py_TYPE.
PyVarObject
This is an extension of PyObject that adds the ob_size field. This is only used for objects that have some
notion of length. This type does not often appear in the Python/C API. Access to the members must be done by
using the macros Py_REFCNT, Py_TYPE, and Py_SIZE.
PyObject_HEAD
This is a macro used when declaring new types which represent objects without a varying length. The PyObject_HEAD macro expands to:
PyObject ob_base;
PyObject_VAR_HEAD
This is a macro used when declaring new types which represent objects with a length that varies from instance to instance. The PyObject_VAR_HEAD macro expands to:
PyVarObject ob_base;
Py_TYPE(o)
This macro gives access to the type of the Python object o. It expands to:
(((PyObject*)(o))->ob_type)
Py_REFCNT(o)
This macro gives access to the reference count of the Python object o. It expands to:
(((PyObject*)(o))->ob_refcnt)
Py_SIZE(o)
This macro gives access to the ob_size field of the variable-size Python object o. It expands to:
(((PyVarObject*)(o))->ob_size)
PyObject_HEAD_INIT(type)
This is a macro which expands to initialization values for a new PyObject type. This macro expands to:
_PyObject_EXTRA_INIT
1, type,
PyVarObject_HEAD_INIT(type, size)
This is a macro which expands to initialization values for a new PyVarObject type, including the ob_size
field. This macro expands to:
_PyObject_EXTRA_INIT
1, type, size,
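A minimal sketch of how this initializer is typically used in a static type object (the names MyObject, MyType and mymodule.My are hypothetical):
typedef struct {
    PyObject_HEAD
    int value;
} MyObject;

static PyTypeObject MyType = {
    PyVarObject_HEAD_INIT(NULL, 0)
    .tp_name = "mymodule.My",
    .tp_basicsize = sizeof(MyObject),
    .tp_flags = Py_TPFLAGS_DEFAULT,
    .tp_new = PyType_GenericNew,
};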
PyCFunction
Type of the functions used to implement most Python callables in C. Functions of this type take two PyObject*
parameters and return one such value. If the return value is NULL, an exception shall have been set. If not NULL,
the return value is interpreted as the return value of the function as exposed in Python. The function must return a
new reference.
The function signature is:
PyObject *PyCFunction(PyObject *self, PyObject *args);
PyCFunctionWithKeywords
Type of the functions used to implement Python callables in C with signature METH_VARARGS |
METH_KEYWORDS. The function signature is:
PyObject *PyCFunctionWithKeywords(PyObject *self, PyObject *args, PyObject *kwargs);
_PyCFunctionFast
Type of the functions used to implement Python callables in C with signature METH_FASTCALL. The function
signature is:
PyObject *_PyCFunctionFast(PyObject *self, PyObject *const *args, Py_ssize_t nargs);
_PyCFunctionFastWithKeywords
Type of the functions used to implement Python callables in C with signature METH_FASTCALL |
METH_KEYWORDS. The function signature is:
PyObject *_PyCFunctionFastWithKeywords(PyObject *self, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames);
PyCMethod
Type of the functions used to implement Python callables in C with signature METH_METHOD |
METH_FASTCALL | METH_KEYWORDS. The function signature is:
PyObject *PyCMethod(PyObject *self, PyTypeObject *defining_class, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames);
The ml_meth is a C function pointer. The functions may be of different types, but they always return PyObject*.
If the function is not of the PyCFunction, the compiler will require a cast in the method table. Even though
PyCFunction defines the first parameter as PyObject*, it is common that the method implementation uses the
specific C type of the self object.
The ml_flags field is a bitfield which can include the following flags. The individual flags indicate either a calling
convention or a binding convention.
There are these calling conventions:
METH_VARARGS
This is the typical calling convention, where the methods have the type PyCFunction. The function expects two
PyObject* values. The first one is the self object for methods; for module functions, it is the module object.
The second parameter (often called args) is a tuple object representing all arguments. This parameter is typically
processed using PyArg_ParseTuple() or PyArg_UnpackTuple().
METH_VARARGS | METH_KEYWORDS
Methods with these flags must be of type PyCFunctionWithKeywords. The function expects three parame-
ters: self, args, kwargs where kwargs is a dictionary of all the keyword arguments or possibly NULL if there are no
keyword arguments. The parameters are typically processed using PyArg_ParseTupleAndKeywords().
METH_FASTCALL
Fast calling convention supporting only positional arguments. The methods have the type _PyCFunctionFast.
The first parameter is self, the second parameter is a C array of PyObject* values indicating the arguments and
the third parameter is the number of arguments (the length of the array).
This is not part of the limited API.
New in version 3.7.
METH_FASTCALL | METH_KEYWORDS
Extension of METH_FASTCALL supporting also keyword arguments, with methods of type
_PyCFunctionFastWithKeywords. Keyword arguments are passed the same way as in the vector-
call protocol: there is an additional fourth PyObject* parameter which is a tuple representing the names of the
keyword arguments (which are guaranteed to be strings) or possibly NULL if there are no keywords. The values
of the keyword arguments are stored in the args array, after the positional arguments.
This is not part of the limited API.
New in version 3.7.
METH_METHOD | METH_FASTCALL | METH_KEYWORDS
Extension of METH_FASTCALL | METH_KEYWORDS supporting the defining class, that is, the class that con-
tains the method in question. The defining class might be a superclass of Py_TYPE(self).
The method needs to be of type PyCMethod, the same as for METH_FASTCALL | METH_KEYWORDS with
defining_class argument added after self.
New in version 3.9.
METH_NOARGS
Methods without parameters don’t need to check whether arguments are given if they are listed with the
METH_NOARGS flag. They need to be of type PyCFunction. The first parameter is typically named self
and will hold a reference to the module or object instance. In all cases the second parameter will be NULL.
METH_O
Methods with a single object argument can be listed with the METH_O flag, instead of invoking
PyArg_ParseTuple() with a "O" argument. They have the type PyCFunction, with the self parame-
ter, and a PyObject* parameter representing the single argument.
These two constants are not used to indicate the calling convention but the binding convention when used with methods of classes. They may not be used for functions defined for modules. At most one of these flags may be set for any given method.
METH_CLASS
The method will be passed the type object as the first parameter rather than an instance of the type. This is used
to create class methods, similar to what is created when using the classmethod() built-in function.
METH_STATIC
The method will be passed NULL as the first parameter rather than an instance of the type. This is used to create
static methods, similar to what is created when using the staticmethod() built-in function.
One other constant controls whether a method is loaded in place of another definition with the same method name.
METH_COEXIST
The method will be loaded in place of existing definitions. Without METH_COEXIST, the default is to skip re-
peated definitions. Since slot wrappers are loaded before the method table, the existence of a sq_contains slot,
for example, would generate a wrapped method named __contains__() and preclude the loading of a corre-
sponding PyCFunction with the same name. With the flag defined, the PyCFunction will be loaded in place of the
wrapper object and will co-exist with the slot. This is helpful because calls to PyCFunctions are optimized more
than wrapper object calls.
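A minimal sketch of a method table using these flags (spam_system and spam_uptime are hypothetical C functions of type PyCFunction):
static PyMethodDef spam_methods[] = {
    {"system", spam_system, METH_VARARGS,
     "Execute a shell command."},
    {"uptime", spam_uptime, METH_NOARGS,
     "Return the uptime in seconds."},
    {NULL, NULL, 0, NULL}   /* sentinel */
};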
PyMemberDef
Structure which describes an attribute of a type which corresponds to a C struct member. Its fields are:
type can be one of many T_ macros corresponding to various C types. When the member is accessed in Python,
it will be converted to the equivalent Python type.
T_OBJECT and T_OBJECT_EX differ in that T_OBJECT returns None if the member is NULL and
T_OBJECT_EX raises an AttributeError. Try to use T_OBJECT_EX over T_OBJECT because
T_OBJECT_EX handles use of the del statement on that attribute more correctly than T_OBJECT.
flags can be 0 for write and read access or READONLY for read-only access. Using T_STRING for type
implies READONLY. T_STRING data is interpreted as UTF-8. Only T_OBJECT and T_OBJECT_EX members
can be deleted. (They are set to NULL).
For heap-allocated types (created using PyType_FromSpec() or similar), PyMemberDef may contain definitions
for the special members __dictoffset__, __weaklistoffset__ and __vectorcalloffset__,
corresponding to tp_dictoffset, tp_weaklistoffset and tp_vectorcall_offset in type objects.
These must be defined with T_PYSSIZET and READONLY, for example:
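A minimal sketch of such a definition; SpamObject and its dict member are hypothetical names used only for illustration:

#include <stddef.h>          /* offsetof() */
#include <Python.h>
#include "structmember.h"    /* T_PYSSIZET, READONLY */

typedef struct {
    PyObject_HEAD
    PyObject *dict;          /* instance dictionary */
} SpamObject;

static PyMemberDef spam_members[] = {
    {"__dictoffset__", T_PYSSIZET, offsetof(SpamObject, dict), READONLY},
    {NULL}                   /* sentinel */
};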
The get function takes one PyObject* parameter (the instance) and a function pointer (the associated
closure):
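For reference, the corresponding typedef as declared in CPython's headers:

typedef PyObject *(*getter)(PyObject *, void *);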
It should return a new reference on success or NULL with a set exception on failure.
set functions take two PyObject* parameters (the instance and the value to be set) and a function pointer (the
associated closure):
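For reference, the corresponding typedef as declared in CPython's headers:

typedef int (*setter)(PyObject *, PyObject *, void *);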
In case the attribute should be deleted the second parameter is NULL. Should return 0 on success or -1 with a set
exception on failure.
Perhaps one of the most important structures of the Python object system is the structure that defines a new type: the
PyTypeObject structure. Type objects can be handled using any of the PyObject_*() or PyType_*() functions,
but do not offer much that’s interesting to most Python applications. These objects are fundamental to how objects behave,
so they are very important to the interpreter itself and to any extension module that implements new types.
Type objects are fairly large compared to most of the standard types. The reason for the size is that each type object stores
a large number of values, mostly C function pointers, each of which implements a small part of the type’s functionality.
The fields of the type object are examined in detail in this section. The fields will be described in the order in which they
occur in the structure.
In addition to the following quick reference, the Examples section provides at-a-glance insight into the meaning and use
of PyTypeObject.
The quick-reference tables (“tp slots” and “sub-slots”) list each slot together with its typedef, the special methods it corresponds to, and its inheritance behaviour.

1 A slot name in parentheses indicates it is (effectively) deprecated. Names in angle brackets should be treated as read-only. Names in square brackets are for internal use only. “<R>” (as a prefix) means the field is required (must be non-NULL).

sub-slots (buffer protocol):
    bf_getbuffer        getbufferproc
    bf_releasebuffer    releasebufferproc

slot typedefs:
    newfunc         PyObject *(PyObject *, PyObject *, PyObject *)
    initproc        int (PyObject *, PyObject *, PyObject *)
    setattrfunc     int (PyObject *, const char *, PyObject *)
    getattrofunc    PyObject *(PyObject *, PyObject *)
    setattrofunc    int (PyObject *, PyObject *, PyObject *)
    descrgetfunc    PyObject *(PyObject *, PyObject *, PyObject *)
    descrsetfunc    int (PyObject *, PyObject *, PyObject *)
The structure definition for PyTypeObject can be found in Include/object.h. For convenience of reference,
this repeats the definition found there:
typedef struct _typeobject {
    PyObject_VAR_HEAD
    const char *tp_name; /* For printing, in format "<module>.<name>" */
    Py_ssize_t tp_basicsize, tp_itemsize; /* For allocation */

    destructor tp_dealloc;
    Py_ssize_t tp_vectorcall_offset;
    getattrfunc tp_getattr;
    setattrfunc tp_setattr;
    PyAsyncMethods *tp_as_async; /* formerly known as tp_compare (Python 2)
                                    or tp_reserved (Python 3) */
    reprfunc tp_repr;

    PyNumberMethods *tp_as_number;
    PySequenceMethods *tp_as_sequence;
    PyMappingMethods *tp_as_mapping;

    hashfunc tp_hash;
    ternaryfunc tp_call;
    reprfunc tp_str;
    getattrofunc tp_getattro;
    setattrofunc tp_setattro;

    /* rich comparisons */
    richcmpfunc tp_richcompare;

    /* Iterators */
    /* ... (many further slots elided) ... */

    destructor tp_finalize;
} PyTypeObject;
The type object structure extends the PyVarObject structure. The ob_size field is used for dynamic types (cre-
ated by type_new(), usually called from a class statement). Note that PyType_Type (the metatype) initializes
tp_itemsize, which means that its instances (i.e. type objects) must have the ob_size field.
PyObject* PyObject._ob_next
PyObject* PyObject._ob_prev
These fields are only present when the macro Py_TRACE_REFS is defined. Their initialization to NULL is taken
care of by the PyObject_HEAD_INIT macro. For statically allocated objects, these fields always remain NULL.
For dynamically allocated objects, these two fields are used to link the object into a doubly-linked list of all live
objects on the heap. This could be used for various debugging purposes; currently the only use is to print the objects
that are still alive at the end of a run when the environment variable PYTHONDUMPREFS is set.
Inheritance:
These fields are not inherited by subtypes.
Py_ssize_t PyObject.ob_refcnt
This is the type object’s reference count, initialized to 1 by the PyObject_HEAD_INIT macro. Note that for
statically allocated type objects, the type’s instances (objects whose ob_type points back to the type) do not count
as references. But for dynamically allocated type objects, the instances do count as references.
Inheritance:
This field is not inherited by subtypes.
PyTypeObject* PyObject.ob_type
This is the type’s type, in other words its metatype. It is initialized by the argument to the
PyObject_HEAD_INIT macro, and its value should normally be &PyType_Type. However, for dynami-
cally loadable extension modules that must be usable on Windows (at least), the compiler complains that this is
not a valid initializer. Therefore, the convention is to pass NULL to the PyObject_HEAD_INIT macro and to
initialize this field explicitly at the start of the module’s initialization function, before doing anything else. This is
typically done like this:
Foo_Type.ob_type = &PyType_Type;
This should be done before any instances of the type are created. PyType_Ready() checks if ob_type is
NULL, and if so, initializes it to the ob_type field of the base class. PyType_Ready() will not change this
field if it is non-zero.
Inheritance:
This field is inherited by subtypes.
Py_ssize_t PyVarObject.ob_size
For statically allocated type objects, this should be initialized to zero. For dynamically allocated type objects, this
field has a special internal meaning.
Inheritance:
This field is not inherited by subtypes.
Each slot has a section describing inheritance. If PyType_Ready() may set a value when the field is set to NULL
then there will also be a “Default” section. (Note that many fields set on PyBaseObject_Type and PyType_Type
effectively act as defaults.)
const char* PyTypeObject.tp_name
Pointer to a NUL-terminated string containing the name of the type. For types that are accessible as module globals,
the string should be the full module name, followed by a dot, followed by the type name; for built-in types, it should
be just the type name. If the module is a submodule of a package, the full package name is part of the full module
name. For example, a type named T defined in module M in subpackage Q in package P should have the tp_name
initializer "P.Q.M.T".
For dynamically allocated type objects, this should just be the type name, and the module name explicitly stored in
the type dict as the value for key '__module__'.
For statically allocated type objects, the tp_name field should contain a dot. Everything before the last dot is made
accessible as the __module__ attribute, and everything after the last dot is made accessible as the __name__
attribute.
If no dot is present, the entire tp_name field is made accessible as the __name__ attribute, and the
__module__ attribute is undefined (unless explicitly set in the dictionary, as explained above). This means
your type will be impossible to pickle. Additionally, it will not be listed in module documentations created with
pydoc.
This field must not be NULL. It is the only required field in PyTypeObject (other than potentially
tp_itemsize).
Inheritance:
The destructor function is called by the Py_DECREF() and Py_XDECREF() macros when the new reference
count is zero. At this point, the instance is still in existence, but there are no references to it. The destructor
function should free all references which the instance owns, free all memory buffers owned by the instance (us-
ing the freeing function corresponding to the allocation function used to allocate the buffer), and call the type’s
tp_free function. If the type is not subtypable (doesn’t have the Py_TPFLAGS_BASETYPE flag bit set), it
is permissible to call the object deallocator directly instead of via tp_free. The object deallocator should be
the one used to allocate the instance; this is normally PyObject_Del() if the instance was allocated using
PyObject_New() or PyObject_NewVar(), or PyObject_GC_Del() if the instance was allocated us-
ing PyObject_GC_New() or PyObject_GC_NewVar().
Finally, if the type is heap allocated (Py_TPFLAGS_HEAPTYPE), the deallocator should decrement the reference
count for its type object after calling the type deallocator. In order to avoid dangling pointers, the recommended
way to achieve this is:
static void foo_dealloc(foo_object *self) {
    PyTypeObject *tp = Py_TYPE(self);
    /* free references and buffers owned by the instance here */
    tp->tp_free(self);
    Py_DECREF(tp);
}
Inheritance:
This field is inherited by subtypes.
Py_ssize_t PyTypeObject.tp_vectorcall_offset
An optional offset to a per-instance function that implements calling the object using the vectorcall protocol, a more
efficient alternative of the simpler tp_call.
This field is only used if the flag Py_TPFLAGS_HAVE_VECTORCALL is set. If so, this must be a positive integer
containing the offset in the instance of a vectorcallfunc pointer.
The vectorcallfunc pointer may be NULL, in which case the instance behaves as if
Py_TPFLAGS_HAVE_VECTORCALL was not set: calling the instance falls back to tp_call.
Any class that sets Py_TPFLAGS_HAVE_VECTORCALL must also set tp_call and make sure its behaviour is
consistent with the vectorcallfunc function. This can be done by setting tp_call to PyVectorcall_Call().
Warning: It is not recommended for heap types to implement the vectorcall protocol. When a user sets
__call__ in Python code, only tp_call is updated, likely making it inconsistent with the vectorcall function.
Note: The semantics of the tp_vectorcall_offset slot are provisional and expected to be finalized in
Python 3.9. If you use vectorcall, plan for updating your code for Python 3.9.
Changed in version 3.8: Before version 3.8, this slot was named tp_print. In Python 2.x, it was used for printing
to a file. In Python 3.0 to 3.7, it was unused.
Inheritance:
This field is always inherited. However, the Py_TPFLAGS_HAVE_VECTORCALL flag is not always inherited. If
it’s not, then the subclass won’t use vectorcall, except when PyVectorcall_Call() is explicitly called. This
is in particular the case for heap types (including subclasses defined in Python).
getattrfunc PyTypeObject.tp_getattr
An optional pointer to the get-attribute-string function.
This field is deprecated. When it is defined, it should point to a function that acts the same as the tp_getattro
function, but taking a C string instead of a Python string object to give the attribute name.
Inheritance:
Group: tp_getattr, tp_getattro
This field is inherited by subtypes together with tp_getattro: a subtype inherits both tp_getattr and
tp_getattro from its base type when the subtype’s tp_getattr and tp_getattro are both NULL.
setattrfunc PyTypeObject.tp_setattr
An optional pointer to the function for setting and deleting attributes.
This field is deprecated. When it is defined, it should point to a function that acts the same as the tp_setattro
function, but taking a C string instead of a Python string object to give the attribute name.
Inheritance:
The function must return a string or a Unicode object. Ideally, this function should return a string that, when passed
to eval(), given a suitable environment, returns an object with the same value. If this is not feasible, it should
return a string starting with '<' and ending with '>' from which both the type and the value of the object can be
deduced.
Inheritance:
This field is inherited by subtypes.
Default:
When this field is not set, a string of the form <%s object at %p> is returned, where %s is replaced by the
type name, and %p by the object’s memory address.
PyNumberMethods* PyTypeObject.tp_as_number
Pointer to an additional structure that contains fields relevant only to objects which implement the number protocol.
These fields are documented in Number Object Structures.
Inheritance:
The tp_as_number field is not inherited, but the contained fields are inherited individually.
PySequenceMethods* PyTypeObject.tp_as_sequence
Pointer to an additional structure that contains fields relevant only to objects which implement the sequence protocol.
These fields are documented in Sequence Object Structures.
Inheritance:
The tp_as_sequence field is not inherited, but the contained fields are inherited individually.
PyMappingMethods* PyTypeObject.tp_as_mapping
Pointer to an additional structure that contains fields relevant only to objects which implement the mapping protocol.
These fields are documented in Mapping Object Structures.
Inheritance:
The tp_as_mapping field is not inherited, but the contained fields are inherited individually.
hashfunc PyTypeObject.tp_hash
An optional pointer to a function that implements the built-in function hash().
The signature is the same as for PyObject_Hash():
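A sketch of the slot signature (the parameter name is illustrative):

Py_hash_t tp_hash(PyObject *self);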
The value -1 should not be returned as a normal return value; when an error occurs during the computation of the
hash value, the function should set an exception and return -1.
When this field is not set (and tp_richcompare is not set), an attempt to take the hash of the object raises
TypeError. This is the same as setting it to PyObject_HashNotImplemented().
This field can be set explicitly to PyObject_HashNotImplemented() to block inheritance of the hash
method from a parent type. This is interpreted as the equivalent of __hash__ = None at the Python level,
causing isinstance(o, collections.abc.Hashable) to correctly return False. Note that the converse
is also true - setting __hash__ = None on a class at the Python level will result in the tp_hash slot being set
to PyObject_HashNotImplemented().
Inheritance:
Group: tp_hash, tp_richcompare
This field is inherited by subtypes together with tp_richcompare: a subtype inherits both of
tp_richcompare and tp_hash, when the subtype’s tp_richcompare and tp_hash are both NULL.
ternaryfunc PyTypeObject.tp_call
An optional pointer to a function that implements calling the object. This should be NULL if the object is not
callable. The signature is the same as for PyObject_Call():
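In C terms, the expected signature looks like this (parameter names are illustrative):

PyObject *tp_call(PyObject *self, PyObject *args, PyObject *kwargs);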
Inheritance:
This field is inherited by subtypes.
reprfunc PyTypeObject.tp_str
An optional pointer to a function that implements the built-in operation str(). (Note that str is a type now,
and str() calls the constructor for that type. This constructor calls PyObject_Str() to do the actual work,
and PyObject_Str() will call this handler.)
The signature is the same as for PyObject_Str():
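A sketch of the slot signature (the parameter name is illustrative):

PyObject *tp_str(PyObject *self);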
The function must return a string or a Unicode object. It should be a “friendly” string representation of the object,
as this is the representation that will be used, among other things, by the print() function.
Inheritance:
This field is inherited by subtypes.
Default:
When this field is not set, PyObject_Repr() is called to return a string representation.
getattrofunc PyTypeObject.tp_getattro
An optional pointer to the get-attribute function.
The signature is the same as for PyObject_GetAttr():
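In C terms, the expected signature looks like this (parameter names are illustrative):

PyObject *tp_getattro(PyObject *self, PyObject *attr);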
It is usually convenient to set this field to PyObject_GenericGetAttr(), which implements the normal way
of looking for object attributes.
Inheritance:
In addition, setting value to NULL to delete an attribute must be supported. It is usually convenient to set this field
to PyObject_GenericSetAttr(), which implements the normal way of setting object attributes.
Inheritance:
Group: tp_setattr, tp_setattro
This field is inherited by subtypes together with tp_setattr: a subtype inherits both tp_setattr and
tp_setattro from its base type when the subtype’s tp_setattr and tp_setattro are both NULL.
Default:
PyBaseObject_Type uses PyObject_GenericSetAttr().
PyBufferProcs* PyTypeObject.tp_as_buffer
Pointer to an additional structure that contains fields relevant only to objects which implement the buffer interface.
These fields are documented in Buffer Object Structures.
Inheritance:
The tp_as_buffer field is not inherited, but the contained fields are inherited individually.
unsigned long PyTypeObject.tp_flags
This field is a bit mask of various flags. Some flags indicate variant semantics for certain situations; others are used
to indicate that certain fields in the type object (or in the extension structures referenced via tp_as_number,
tp_as_sequence, tp_as_mapping, and tp_as_buffer) that were historically not always present are
valid; if such a flag bit is clear, the type fields it guards must not be accessed and must be considered to have a zero
or NULL value instead.
Inheritance:
Inheritance of this field is complicated. Most flag bits are inherited individually, i.e. if the base type has a flag
bit set, the subtype inherits this flag bit. The flag bits that pertain to extension structures are strictly inherited if
the extension structure is inherited, i.e. the base type’s value of the flag bit is copied into the subtype together
with a pointer to the extension structure. The Py_TPFLAGS_HAVE_GC flag bit is inherited together with the
tp_traverse and tp_clear fields: they are copied from the base type if the Py_TPFLAGS_HAVE_GC flag bit
is clear in the subtype and the subtype's tp_traverse and tp_clear fields are NULL.
Default:
PyBaseObject_Type uses Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE.
Bit Masks:
The following bit masks are currently defined; these can be ORed together using the | operator to form the value of
the tp_flags field. The macro PyType_HasFeature() takes a type and a flags value, tp and f, and checks
whether tp->tp_flags & f is non-zero.
Py_TPFLAGS_HEAPTYPE
This bit is set when the type object itself is allocated on the heap, for example, types created dynamically
using PyType_FromSpec(). In this case, the ob_type field of its instances is considered a reference
to the type, and the type object is INCREF’ed when a new instance is created, and DECREF’ed when an
instance is destroyed (this does not apply to instances of subtypes; only the type referenced by the instance’s
ob_type gets INCREF’ed or DECREF’ed).
Inheritance:
???
Py_TPFLAGS_BASETYPE
This bit is set when the type can be used as the base type of another type. If this bit is clear, the type cannot
be subtyped (similar to a “final” class in Java).
Inheritance:
???
Py_TPFLAGS_READY
This bit is set when the type object has been fully initialized by PyType_Ready().
Inheritance:
???
Py_TPFLAGS_READYING
This bit is set while PyType_Ready() is in the process of initializing the type object.
Inheritance:
???
Py_TPFLAGS_HAVE_GC
This bit is set when the object supports garbage collection. If this bit is set, instances must be created us-
ing PyObject_GC_New() and destroyed using PyObject_GC_Del(). More information in section
Supporting Cyclic Garbage Collection. This bit also implies that the GC-related fields tp_traverse and
tp_clear are present in the type object.
Inheritance:
Group: Py_TPFLAGS_HAVE_GC, tp_traverse, tp_clear
The Py_TPFLAGS_HAVE_GC flag bit is inherited together with the tp_traverse and tp_clear fields:
they are copied from the base type if the flag bit is clear in the subtype and the subtype's tp_traverse and
tp_clear fields are NULL.
Py_TPFLAGS_DEFAULT
This is a bitmask of all the bits that pertain to the existence of certain fields in the type object and its extension
structures. Currently, it includes the following bits: Py_TPFLAGS_HAVE_STACKLESS_EXTENSION,
Py_TPFLAGS_HAVE_VERSION_TAG.
Inheritance:
???
Py_TPFLAGS_METHOD_DESCRIPTOR
This bit indicates that objects behave like unbound methods.
If this flag is set for type(meth), then:
• meth.__get__(obj, cls)(*args, **kwds) (with obj not None) must be equivalent to
meth(obj, *args, **kwds).
More information about Python’s garbage collection scheme can be found in section Supporting Cyclic Garbage
Collection.
The tp_traverse pointer is used by the garbage collector to detect reference cycles. A typical implementation
of a tp_traverse function simply calls Py_VISIT() on each of the instance’s members that are Python
objects that the instance owns. For example, this is function local_traverse() from the _thread extension
module:
static int
local_traverse(localobject *self, visitproc visit, void *arg)
{
    Py_VISIT(self->args);
    Py_VISIT(self->kw);
    Py_VISIT(self->dict);
    return 0;
}
Note that Py_VISIT() is called only on those members that can participate in reference cycles. Although there
is also a self->key member, it can only be NULL or a Python string and therefore cannot be part of a reference
cycle.
On the other hand, even if you know a member can never be part of a cycle, as a debugging aid you may want to
visit it anyway just so the gc module’s get_referents() function will include it.
Warning: When implementing tp_traverse, only the members that the instance owns (by having strong
references to them) must be visited. For instance, if an object supports weak references via the tp_weaklist
slot, the pointer supporting the linked list (what tp_weaklist points to) must not be visited as the instance does
not directly own the weak references to itself (the weakreference list is there to support the weak reference
machinery, but the instance has no strong reference to the elements inside it, as they are allowed to be removed
even if the instance is still alive).
Note that Py_VISIT() requires the visit and arg parameters to local_traverse() to have these specific
names; don’t name them just anything.
Heap-allocated types (Py_TPFLAGS_HEAPTYPE, such as those created with PyType_FromSpec() and sim-
ilar APIs) hold a reference to their type. Their traversal function must therefore either visit Py_TYPE(self),
or delegate this responsibility by calling tp_traverse of another heap-allocated type (such as a heap-allocated
superclass). If they do not, the type object may not be garbage-collected.
Changed in version 3.9: Heap-allocated types are expected to visit Py_TYPE(self) in tp_traverse. In
earlier versions of Python, due to bug 40217, doing this may lead to crashes in subclasses.
Inheritance:
Group: Py_TPFLAGS_HAVE_GC, tp_traverse, tp_clear
This field is inherited by subtypes together with tp_clear and the Py_TPFLAGS_HAVE_GC flag bit: the flag
bit, tp_traverse, and tp_clear are all inherited from the base type if they are all zero in the subtype.
inquiry PyTypeObject.tp_clear
An optional pointer to a clear function for the garbage collector. This is only used if the Py_TPFLAGS_HAVE_GC
flag bit is set. The signature is:
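A sketch of the slot signature (the parameter name is illustrative):

int tp_clear(PyObject *self);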
The tp_clear member function is used to break reference cycles in cyclic garbage detected by the garbage
collector. Taken together, all tp_clear functions in the system must combine to break all reference cycles. This
is subtle, and if in any doubt supply a tp_clear function. For example, the tuple type does not implement a
tp_clear function, because it’s possible to prove that no reference cycle can be composed entirely of tuples.
Therefore the tp_clear functions of other types must be sufficient to break any cycle containing a tuple. This
isn’t immediately obvious, and there’s rarely a good reason to avoid implementing tp_clear.
Implementations of tp_clear should drop the instance’s references to those of its members that may be Python
objects, and set its pointers to those members to NULL, as in the following example:
static int
local_clear(localobject *self)
{
    Py_CLEAR(self->key);
    Py_CLEAR(self->args);
    Py_CLEAR(self->kw);
    Py_CLEAR(self->dict);
    return 0;
}
The Py_CLEAR() macro should be used, because clearing references is delicate: the reference to the contained
object must not be decremented until after the pointer to the contained object is set to NULL. This is because
decrementing the reference count may cause the contained object to become trash, triggering a chain of reclamation
activity that may include invoking arbitrary Python code (due to finalizers, or weakref callbacks, associated with the
contained object). If it’s possible for such code to reference self again, it’s important that the pointer to the contained
object be NULL at that time, so that self knows the contained object can no longer be used. The Py_CLEAR()
macro performs the operations in a safe order.
Note that tp_clear is not always called before an instance is deallocated. For example, when reference count-
ing is enough to determine that an object is no longer used, the cyclic garbage collector is not involved and
tp_dealloc is called directly.
Because the goal of tp_clear functions is to break reference cycles, it’s not necessary to clear contained ob-
jects like Python strings or Python integers, which can’t participate in reference cycles. On the other hand, it
may be convenient to clear all contained Python objects, and write the type’s tp_dealloc function to invoke
tp_clear.
More information about Python’s garbage collection scheme can be found in section Supporting Cyclic Garbage
Collection.
Inheritance:
Group: Py_TPFLAGS_HAVE_GC, tp_traverse, tp_clear
This field is inherited by subtypes together with tp_traverse and the Py_TPFLAGS_HAVE_GC flag bit: the
flag bit, tp_traverse, and tp_clear are all inherited from the base type if they are all zero in the subtype.
richcmpfunc PyTypeObject.tp_richcompare
An optional pointer to the rich comparison function, whose signature is:
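A sketch of the slot signature (parameter names are illustrative):

PyObject *tp_richcompare(PyObject *self, PyObject *other, int op);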
The first parameter is guaranteed to be an instance of the type that is defined by PyTypeObject.
The function should return the result of the comparison (usually Py_True or Py_False). If the comparison
is undefined, it must return Py_NotImplemented, if another error occurred it must return NULL and set an
exception condition.
The following constants are defined to be used as the third argument for tp_richcompare and for
PyObject_RichCompare():
Constant Comparison
Py_LT <
Py_LE <=
Py_EQ ==
Py_NE !=
Py_GT >
Py_GE >=
Inheritance:
This field is inherited by subtypes.
iternextfunc PyTypeObject.tp_iternext
An optional pointer to a function that returns the next item in an iterator. The signature is:
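A sketch of the slot signature (the parameter name is illustrative):

PyObject *tp_iternext(PyObject *self);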
When the iterator is exhausted, it must return NULL; a StopIteration exception may or may not be set. When
another error occurs, it must return NULL too. Its presence signals that the instances of this type are iterators.
Iterator types should also define the tp_iter function, and that function should return the iterator instance itself
(not a new iterator instance).
This function has the same signature as PyIter_Next().
Inheritance:
This field is inherited by subtypes.
struct PyMethodDef* PyTypeObject.tp_methods
An optional pointer to a static NULL-terminated array of PyMethodDef structures, declaring regular methods
of this type.
For each entry in the array, an entry is added to the type’s dictionary (see tp_dict below) containing a method
descriptor.
Inheritance:
This field is not inherited by subtypes (methods are inherited through a different mechanism).
struct PyMemberDef* PyTypeObject.tp_members
An optional pointer to a static NULL-terminated array of PyMemberDef structures, declaring regular data mem-
bers (fields or slots) of instances of this type.
For each entry in the array, an entry is added to the type’s dictionary (see tp_dict below) containing a member
descriptor.
Inheritance:
This field is not inherited by subtypes (members are inherited through a different mechanism).
struct PyGetSetDef* PyTypeObject.tp_getset
An optional pointer to a static NULL-terminated array of PyGetSetDef structures, declaring computed attributes
of instances of this type.
For each entry in the array, an entry is added to the type’s dictionary (see tp_dict below) containing a getset
descriptor.
Inheritance:
This field is not inherited by subtypes (computed attributes are inherited through a different mechanism).
PyTypeObject* PyTypeObject.tp_base
An optional pointer to a base type from which type properties are inherited. At this level, only single inheritance
is supported; multiple inheritance requires dynamically creating a type object by calling the metatype.
Note: Slot initialization is subject to the rules of initializing globals. C99 requires the initializers to be “address
constants”. Function designators like PyType_GenericNew(), with implicit conversion to a pointer, are valid
C99 address constants.
However, the unary ‘&’ operator applied to a non-static variable like PyBaseObject_Type is not required to
produce an address constant. Some compilers support this (gcc does); MSVC does not. Both compilers are strictly
standard conforming in this particular behavior.
Consequently, tp_base should be set in the extension module’s init function.
Inheritance:
This field is not inherited by subtypes (obviously).
Default:
This field defaults to &PyBaseObject_Type (which to Python programmers is known as the type object).
PyObject* PyTypeObject.tp_dict
The type’s dictionary is stored here by PyType_Ready().
This field should normally be initialized to NULL before PyType_Ready is called; it may also be initialized to
a dictionary containing initial attributes for the type. Once PyType_Ready() has initialized the type, extra
attributes for the type may be added to this dictionary only if they don’t correspond to overloaded operations (like
__add__()).
Inheritance:
This field is not inherited by subtypes (though the attributes defined in here are inherited through a different mech-
anism).
Default:
If this field is NULL, PyType_Ready() will assign a new dictionary to it.
Warning: It is not safe to use PyDict_SetItem() on or otherwise modify tp_dict with the dictionary
C-API.
descrgetfunc PyTypeObject.tp_descr_get
An optional pointer to a “descriptor get” function.
The function signature is:
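A sketch of the slot signature (parameter names are illustrative):

PyObject *tp_descr_get(PyObject *self, PyObject *obj, PyObject *type);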
Inheritance:
This field is inherited by subtypes.
descrsetfunc PyTypeObject.tp_descr_set
An optional pointer to a function for setting and deleting a descriptor’s value.
The function signature is:
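A sketch of the slot signature (parameter names are illustrative):

int tp_descr_set(PyObject *self, PyObject *obj, PyObject *value);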
If tp_dictoffset is negative, the dictionary offset in an instance is computed as tp_basicsize +
abs(ob_size)*tp_itemsize + tp_dictoffset, rounded up to a multiple of sizeof(void*); here
tp_basicsize, tp_itemsize and tp_dictoffset are taken from the type object, and ob_size
is taken from the instance. The absolute value is taken because ints use the sign of ob_size to store
the sign of the number. (There’s never a need to do this calculation yourself; it is done for you by
_PyObject_GetDictPtr().)
Inheritance:
This field is inherited by subtypes, but see the rules listed below. A subtype may override this offset; this means that
the subtype instances store the dictionary at a different offset from the base type. Since the dictionary is always
found via tp_dictoffset, this should not be a problem.
When a type defined by a class statement has no __slots__ declaration, and none of its base types has an instance
variable dictionary, a dictionary slot is added to the instance layout and the tp_dictoffset is set to that slot’s
offset.
When a type defined by a class statement has a __slots__ declaration, the type inherits its tp_dictoffset
from its base type.
(Adding a slot named __dict__ to the __slots__ declaration does not have the expected effect, it just causes
confusion. Maybe this should be added as a feature just like __weakref__ though.)
Default:
This slot has no default. For static types, if the field is NULL then no __dict__ gets created for instances.
initproc PyTypeObject.tp_init
An optional pointer to an instance initialization function.
This function corresponds to the __init__() method of classes. Like __init__(), it is possible to create an
instance without calling __init__(), and it is possible to reinitialize an instance by calling its __init__()
method again.
The function signature is:
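A sketch of the slot signature (parameter names are illustrative):

int tp_init(PyObject *self, PyObject *args, PyObject *kwds);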
The self argument is the instance to be initialized; the args and kwds arguments represent positional and keyword
arguments of the call to __init__().
The tp_init function, if not NULL, is called when an instance is created normally by calling its type, after the
type’s tp_new function has returned an instance of the type. If the tp_new function returns an instance of some
other type that is not a subtype of the original type, no tp_init function is called; if tp_new returns an instance
of a subtype of the original type, the subtype’s tp_init is called.
Returns 0 on success, -1 and sets an exception on error.
Inheritance:
This field is inherited by subtypes.
Default:
For static types this field does not have a default.
allocfunc PyTypeObject.tp_alloc
An optional pointer to an instance allocation function.
The function signature is:
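A sketch of the slot signature (parameter names are illustrative):

PyObject *tp_alloc(PyTypeObject *self, Py_ssize_t nitems);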
Inheritance:
This field is inherited by static subtypes, but not by dynamic subtypes (subtypes created by a class statement).
Default:
For dynamic subtypes, this field is always set to PyType_GenericAlloc(), to force a standard heap allocation
strategy.
For static subtypes, PyBaseObject_Type uses PyType_GenericAlloc(). That is the recommended
value for all statically defined types.
newfunc PyTypeObject.tp_new
An optional pointer to an instance creation function.
The function signature is:
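A sketch of the slot signature (parameter names are illustrative):

PyObject *tp_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds);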
The subtype argument is the type of the object being created; the args and kwds arguments represent positional
and keyword arguments of the call to the type. Note that subtype doesn’t have to equal the type whose tp_new
function is called; it may be a subtype of that type (but not an unrelated type).
The tp_new function should call subtype->tp_alloc(subtype, nitems) to allocate space for the
object, and then do only as much further initialization as is absolutely necessary. Initialization that can safely be
ignored or repeated should be placed in the tp_init handler. A good rule of thumb is that for immutable types,
all initialization should take place in tp_new, while for mutable types, most initialization should be deferred to
tp_init.
Inheritance:
This field is inherited by subtypes, except it is not inherited by static types whose tp_base is NULL or
&PyBaseObject_Type.
Default:
For static types this field has no default. This means if the slot is defined as NULL, the type cannot be called to
create new instances; presumably there is some other way to create instances, like a factory function.
freefunc PyTypeObject.tp_free
An optional pointer to an instance deallocation function. Its signature is:
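A sketch of the slot signature (the parameter name is illustrative):

void tp_free(void *self);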
(The only example of this is types themselves. The metatype, PyType_Type, defines this function to distinguish
between statically and dynamically allocated types.)
Inheritance:
This field is inherited by subtypes.
Default:
This slot has no default. If this field is NULL, Py_TPFLAGS_HAVE_GC is used as the functional equivalent.
PyObject* PyTypeObject.tp_bases
Tuple of base types.
This is set for types created by a class statement. It should be NULL for statically defined types.
Inheritance:
This field is not inherited.
PyObject* PyTypeObject.tp_mro
Tuple containing the expanded set of base types, starting with the type itself and ending with object, in Method
Resolution Order.
Inheritance:
This field is not inherited; it is calculated fresh by PyType_Ready().
PyObject* PyTypeObject.tp_cache
Unused. Internal use only.
Inheritance:
This field is not inherited.
PyObject* PyTypeObject.tp_subclasses
List of weak references to subclasses. Internal use only.
Inheritance:
If tp_finalize is set, the interpreter calls it once when finalizing an instance. It is called either from the garbage
collector (if the instance is part of an isolated reference cycle) or just before the object is deallocated. Either way,
it is guaranteed to be called before attempting to break reference cycles, ensuring that it finds the object in a sane
state.
tp_finalize should not mutate the current exception status; therefore, a recommended way to write a non-
trivial finalizer is:
static void
local_finalize(PyObject *self)
{
    PyObject *error_type, *error_value, *error_traceback;
    /* Save the current exception, if any. */
    PyErr_Fetch(&error_type, &error_value, &error_traceback);
    /* ... */
    PyErr_Restore(error_type, error_value, error_traceback);
}
For this field to be taken into account (even through inheritance), you must also set the
Py_TPFLAGS_HAVE_FINALIZE flag bit.
Inheritance:
This field is inherited by subtypes.
New in version 3.4.
See also:
“Safe object finalization” (PEP 442)
vectorcallfunc PyTypeObject.tp_vectorcall
Vectorcall function to use for calls of this type object. In other words, it is used to implement vectorcall for type.
__call__. If tp_vectorcall is NULL, the default call implementation using __new__ and __init__ is
used.
Inheritance:
This field is never inherited.
New in version 3.9: (the field exists since 3.8 but it’s only used since 3.9)
Also, note that, in a garbage collected Python, tp_dealloc may be called from any Python thread, not just the thread
which created the object (if the object becomes part of a refcount cycle, that cycle might be collected by a garbage
collection on any thread). This is not a problem for Python API calls, since the thread on which tp_dealloc is called will
own the Global Interpreter Lock (GIL). However, if the object being destroyed in turn destroys objects from some other
C or C++ library, care should be taken to ensure that destroying those objects on the thread which called tp_dealloc will
not violate any assumptions of the library.
Traditionally, types defined in C code are static, that is, a static PyTypeObject structure is defined directly in code and
initialized using PyType_Ready().
This results in types that are limited relative to types defined in Python:
• Static types are limited to one base, i.e. they cannot use multiple inheritance.
• Static type objects (but not necessarily their instances) are immutable. It is not possible to add or modify the type
object’s attributes from Python.
• Static type objects are shared across sub-interpreters, so they should not include any subinterpreter-specific state.
Also, since PyTypeObject is not part of the stable ABI, any extension modules using static types must be compiled
for a specific Python minor version.
An alternative to static types is heap-allocated types, or heap types for short, which correspond closely to classes created
by Python’s class statement.
This is done by filling a PyType_Spec structure and calling PyType_FromSpecWithBases().
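A minimal sketch under assumed names (mymod.Counter, CounterObject and mymod_add_counter are illustrative, not an existing API):

#include <Python.h>

typedef struct {
    PyObject_HEAD
    Py_ssize_t count;
} CounterObject;

static char counter_doc[] = "A simple counter.";

static PyType_Slot counter_slots[] = {
    {Py_tp_doc, counter_doc},
    {0, NULL}                          /* sentinel */
};

static PyType_Spec counter_spec = {
    .name = "mymod.Counter",
    .basicsize = sizeof(CounterObject),
    .itemsize = 0,
    .flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE,
    .slots = counter_slots,
};

/* Typically called from the module's init or exec function. */
static int
mymod_add_counter(PyObject *module)
{
    PyObject *tp = PyType_FromSpecWithBases(&counter_spec, NULL);
    if (tp == NULL)
        return -1;
    if (PyModule_AddObject(module, "Counter", tp) < 0) {
        Py_DECREF(tp);
        return -1;
    }
    return 0;
}

The resulting type object behaves much like a class created with a Python class statement and participates in normal reference counting.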
typedef struct {
    binaryfunc nb_add;
    binaryfunc nb_subtract;
    binaryfunc nb_multiply;
    binaryfunc nb_remainder;
    binaryfunc nb_divmod;
    ternaryfunc nb_power;
    unaryfunc nb_negative;
    unaryfunc nb_positive;
    unaryfunc nb_absolute;
    inquiry nb_bool;
    unaryfunc nb_invert;
    binaryfunc nb_lshift;
    binaryfunc nb_rshift;
    binaryfunc nb_and;
    binaryfunc nb_xor;
    binaryfunc nb_or;
    unaryfunc nb_int;
    void *nb_reserved;
    unaryfunc nb_float;

    binaryfunc nb_inplace_add;
    binaryfunc nb_inplace_subtract;
    binaryfunc nb_inplace_multiply;
    binaryfunc nb_inplace_remainder;
    ternaryfunc nb_inplace_power;
    binaryfunc nb_inplace_lshift;
    binaryfunc nb_inplace_rshift;
    binaryfunc nb_inplace_and;
    binaryfunc nb_inplace_xor;
    binaryfunc nb_inplace_or;

    binaryfunc nb_floor_divide;
    binaryfunc nb_true_divide;
    binaryfunc nb_inplace_floor_divide;
    binaryfunc nb_inplace_true_divide;

    unaryfunc nb_index;

    binaryfunc nb_matrix_multiply;
    binaryfunc nb_inplace_matrix_multiply;
} PyNumberMethods;
Note: Binary and ternary functions must check the type of all their operands, and implement the necessary
conversions (at least one of the operands is an instance of the defined type). If the operation is not defined for the
given operands, binary and ternary functions must return Py_NotImplemented, if another error occurred they
must return NULL and set an exception.
Note: The nb_reserved field should always be NULL. It was previously called nb_long, and was renamed
in Python 3.0.1.
binaryfunc PyNumberMethods.nb_add
binaryfunc PyNumberMethods.nb_subtract
binaryfunc PyNumberMethods.nb_multiply
binaryfunc PyNumberMethods.nb_remainder
binaryfunc PyNumberMethods.nb_divmod
ternaryfunc PyNumberMethods.nb_power
unaryfunc PyNumberMethods.nb_negative
unaryfunc PyNumberMethods.nb_positive
unaryfunc PyNumberMethods.nb_absolute
inquiry PyNumberMethods.nb_bool
unaryfunc PyNumberMethods.nb_invert
binaryfunc PyNumberMethods.nb_lshift
binaryfunc PyNumberMethods.nb_rshift
binaryfunc PyNumberMethods.nb_and
binaryfunc PyNumberMethods.nb_xor
binaryfunc PyNumberMethods.nb_or
unaryfunc PyNumberMethods.nb_int
void *PyNumberMethods.nb_reserved
unaryfunc PyNumberMethods.nb_float
binaryfunc PyNumberMethods.nb_inplace_add
binaryfunc PyNumberMethods.nb_inplace_subtract
binaryfunc PyNumberMethods.nb_inplace_multiply
binaryfunc PyNumberMethods.nb_inplace_remainder
ternaryfunc PyNumberMethods.nb_inplace_power
binaryfunc PyNumberMethods.nb_inplace_lshift
binaryfunc PyNumberMethods.nb_inplace_rshift
binaryfunc PyNumberMethods.nb_inplace_and
binaryfunc PyNumberMethods.nb_inplace_xor
binaryfunc PyNumberMethods.nb_inplace_or
binaryfunc PyNumberMethods.nb_floor_divide
binaryfunc PyNumberMethods.nb_true_divide
binaryfunc PyNumberMethods.nb_inplace_floor_divide
binaryfunc PyNumberMethods.nb_inplace_true_divide
unaryfunc PyNumberMethods.nb_index
binaryfunc PyNumberMethods.nb_matrix_multiply
binaryfunc PyNumberMethods.nb_inplace_matrix_multiply
objobjargproc PyMappingMethods.mp_ass_subscript
This function is used by PyObject_SetItem(), PyObject_DelItem(), PyObject_SetSlice()
and PyObject_DelSlice(). It has the same signature as PyObject_SetItem(), but v can also be set to
NULL to delete an item. If this slot is NULL, the object does not support item assignment and deletion.
Handle a request to exporter to fill in view as specified by flags. Except for point (3), an implementation of this
function MUST take these steps:
(1) Check if the request can be met. If not, raise PyExc_BufferError, set view->obj to NULL and
return -1.
(2) Fill in the requested fields.
(3) Increment an internal counter for the number of exports.
(4) Set view->obj to exporter and increment view->obj.
(5) Return 0.
If exporter is part of a chain or tree of buffer providers, two main schemes can be used:
• Re-export: Each member of the tree acts as the exporting object and sets view->obj to a new reference to
itself.
• Redirect: The buffer request is redirected to the root object of the tree. Here, view->obj will be a new
reference to the root object.
The individual fields of view are described in section Buffer structure; the rules for how an exporter must react to
specific requests are in section Buffer request types.
All memory pointed to in the Py_buffer structure belongs to the exporter and must remain valid until there are
no consumers left. format, shape, strides, suboffsets and internal are read-only for the consumer.
PyBuffer_FillInfo() provides an easy way of exposing a simple bytes buffer while dealing correctly with
all request types.
PyObject_GetBuffer() is the interface for the consumer that wraps this function.
releasebufferproc PyBufferProcs.bf_releasebuffer
The signature of this function is:
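A sketch of the slot signature (parameter names are illustrative):

void bf_releasebuffer(PyObject *exporter, Py_buffer *view);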
Handle a request to release the resources of the buffer. If no resources need to be released, PyBufferProcs.
bf_releasebuffer may be NULL. Otherwise, a standard implementation of this function will take these
optional steps:
(1) Decrement an internal counter for the number of exports.
(2) If the counter is 0, free all memory associated with view.
The exporter MUST use the internal field to keep track of buffer-specific resources. This field is guaranteed
to remain constant, while a consumer MAY pass a copy of the original buffer as the view argument.
This function MUST NOT decrement view->obj, since that is done automatically in PyBuffer_Release()
(this scheme is useful for breaking reference cycles).
PyBuffer_Release() is the interface for the consumer that wraps this function.
typedef struct {
unaryfunc am_await;
unaryfunc am_aiter;
unaryfunc am_anext;
} PyAsyncMethods;
unaryfunc PyAsyncMethods.am_await
The signature of this function is:
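In C terms (the parameter name is illustrative):

PyObject *am_await(PyObject *self);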
The returned object must be an iterator, i.e. PyIter_Check() must return 1 for it.
This slot may be set to NULL if an object is not an awaitable.
unaryfunc PyAsyncMethods.am_aiter
The signature of this function is:
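In C terms (the parameter name is illustrative):

PyObject *am_aiter(PyObject *self);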
Must return an awaitable object. See __anext__() for details. This slot may be set to NULL.
void (*freefunc)(void *)
See tp_free.
PyObject *(*newfunc)(PyObject *, PyObject *, PyObject *)
See tp_new.
int (*initproc)(PyObject *, PyObject *, PyObject *)
See tp_init.
PyObject *(*reprfunc)(PyObject *)
See tp_repr.
PyObject *(*getattrfunc)(PyObject *self, char *attr)
Return the value of the named attribute for the object.
int (*setattrfunc)(PyObject *self, char *attr, PyObject *value)
Set the value of the named attribute for the object. The value argument is set to NULL to delete the attribute.
PyObject *(*getattrofunc)(PyObject *self, PyObject *attr)
Return the value of the named attribute for the object.
See tp_getattro.
int (*setattrofunc)(PyObject *self, PyObject *attr, PyObject *value)
Set the value of the named attribute for the object. The value argument is set to NULL to delete the attribute.
See tp_setattro.
PyObject *(*descrgetfunc)(PyObject *, PyObject *, PyObject *)
See tp_descr_get.
int (*descrsetfunc)(PyObject *, PyObject *, PyObject *)
See tp_descr_set.
Py_hash_t (*hashfunc)(PyObject *)
See tp_hash.
PyObject *(*richcmpfunc)(PyObject *, PyObject *, int)
See tp_richcompare.
PyObject *(*getiterfunc)(PyObject *)
See tp_iter.
PyObject *(*iternextfunc)(PyObject *)
See tp_iternext.
Py_ssize_t (*lenfunc)(PyObject *)
int (*getbufferproc)(PyObject *, Py_buffer *, int)
void (*releasebufferproc)(PyObject *, Py_buffer *)
PyObject *(*unaryfunc)(PyObject *)
PyObject *(*binaryfunc)(PyObject *, PyObject *)
PyObject *(*ternaryfunc)(PyObject *, PyObject *, PyObject *)
PyObject *(*ssizeargfunc)(PyObject *, Py_ssize_t)
int (*ssizeobjargproc)(PyObject *, Py_ssize_t, PyObject *)
int (*objobjproc)(PyObject *, PyObject *)
int (*objobjargproc)(PyObject *, PyObject *, PyObject *)
12.10 Examples
The following are simple examples of Python type definitions. They include common usage you may encounter. Some
demonstrate tricky corner cases. For more examples, practical info, and a tutorial, see defining-new-types and new-types-
topics.
A basic static type:
typedef struct {
    PyObject_HEAD
    const char *data;
} MyObject;
You may also find older code (especially in the CPython code base) with a more verbose initializer:
typedef struct {
    PyObject_HEAD
    const char *data;
    PyObject *inst_dict;
    PyObject *weakreflist;
} MyObject;
A str subclass that cannot be subclassed and cannot be called to create instances (e.g. uses a separate factory func):
typedef struct {
    PyUnicodeObject raw;
    char *extra;
} MyStr;
A basic type whose instances carry no data of their own:

typedef struct {
    PyObject_HEAD
} MyObject;

A basic type with variable-length instances:

typedef struct {
    PyObject_VAR_HEAD
    const char *data[1];
} MyObject;
Python’s support for detecting and collecting garbage which involves circular references requires support from object
types which are “containers” for other objects which may also be containers. Types which do not store references to other
objects, or which only store references to atomic types (such as numbers or strings), do not need to provide any explicit
support for garbage collection.
To create a container type, the tp_flags field of the type object must include Py_TPFLAGS_HAVE_GC, and the
type must provide an implementation of the tp_traverse handler. If instances of the type are mutable, a tp_clear
implementation must also be provided.
Py_TPFLAGS_HAVE_GC
Objects with a type with this flag set must conform with the rules documented here. For convenience these objects
will be referred to as container objects.
Constructors for container types must conform to two rules:
1. The memory for the object must be allocated using PyObject_GC_New() or PyObject_GC_NewVar().
2. Once all the fields which may contain references to other containers are initialized, it must call
PyObject_GC_Track().
Warning: If a type adds the Py_TPFLAGS_HAVE_GC flag, then it must implement at least a tp_traverse
handler or explicitly use one inherited from its base class or classes.
When calling PyType_Ready() or some of the APIs that indirectly call it like
PyType_FromSpecWithBases() or PyType_FromSpec() the interpreter will automatically
populate the tp_flags, tp_traverse and tp_clear fields if the type inherits from a class that
implements the garbage collector protocol and the child class does not include the Py_TPFLAGS_HAVE_GC
flag.
static int
my_traverse(Noddy *self, visitproc visit, void *arg)
{
    Py_VISIT(self->foo);
    Py_VISIT(self->bar);
    return 0;
}
The tp_clear handler must be of the inquiry type, or NULL if the object is immutable.
int (*inquiry)(PyObject *self)
Drop references that may have created reference cycles. Immutable objects do not have to define this method since
they can never directly create reference cycles. Note that the object must still be valid after calling this method
(don’t just call Py_DECREF() on a reference). The collector will call this method if it detects that this object is
involved in a reference cycle.
GLOSSARY
>>> The default Python prompt of the interactive shell. Often seen for code examples which can be executed interactively
in the interpreter.
... Can refer to:
• The default Python prompt of the interactive shell when entering the code for an indented code block, when
within a pair of matching left and right delimiters (parentheses, square brackets, curly braces or triple quotes),
or after specifying a decorator.
• The Ellipsis built-in constant.
2to3 A tool that tries to convert Python 2.x code to Python 3.x code by handling most of the incompatibilities which can
be detected by parsing the source and traversing the parse tree.
2to3 is available in the standard library as lib2to3; a standalone entry point is provided as Tools/scripts/
2to3. See 2to3-reference.
abstract base class Abstract base classes complement duck-typing by providing a way to define interfaces when other
techniques like hasattr() would be clumsy or subtly wrong (for example with magic methods). ABCs introduce
virtual subclasses, which are classes that don’t inherit from a class but are still recognized by isinstance() and
issubclass(); see the abc module documentation. Python comes with many built-in ABCs for data structures
(in the collections.abc module), numbers (in the numbers module), streams (in the io module), import
finders and loaders (in the importlib.abc module). You can create your own ABCs with the abc module.
annotation A label associated with a variable, a class attribute or a function parameter or return value, used by convention
as a type hint.
Annotations of local variables cannot be accessed at runtime, but annotations of global variables, class attributes,
and functions are stored in the __annotations__ special attribute of modules, classes, and functions, respec-
tively.
See variable annotation, function annotation, PEP 484 and PEP 526, which describe this functionality.
argument A value passed to a function (or method) when calling the function. There are two kinds of argument:
• keyword argument: an argument preceded by an identifier (e.g. name=) in a function call or passed as a value
in a dictionary preceded by **. For example, 3 and 5 are both keyword arguments in the following calls to
complex():
complex(real=3, imag=5)
complex(**{'real': 3, 'imag': 5})
• positional argument: an argument that is not a keyword argument. Positional arguments can appear at the
beginning of an argument list and/or be passed as elements of an iterable preceded by *. For example, 3 and
5 are both positional arguments in the following calls:
complex(3, 5)
complex(*(3, 5))
Arguments are assigned to the named local variables in a function body. See the calls section for the rules governing
this assignment. Syntactically, any expression can be used to represent an argument; the evaluated value is assigned
to the local variable.
See also the parameter glossary entry, the FAQ question on the difference between arguments and parameters, and
PEP 362.
asynchronous context manager An object which controls the environment seen in an async with statement by
defining __aenter__() and __aexit__() methods. Introduced by PEP 492.
asynchronous generator A function which returns an asynchronous generator iterator. It looks like a coroutine function
defined with async def except that it contains yield expressions for producing a series of values usable in an
async for loop.
Usually refers to an asynchronous generator function, but may refer to an asynchronous generator iterator in some
contexts. In cases where the intended meaning isn’t clear, using the full terms avoids ambiguity.
An asynchronous generator function may contain await expressions as well as async for, and async with
statements.
asynchronous generator iterator An object created by an asynchronous generator function.
This is an asynchronous iterator which when called using the __anext__() method returns an awaitable object
which will execute the body of the asynchronous generator function until the next yield expression.
Each yield temporarily suspends processing, remembering the location execution state (including local variables
and pending try-statements). When the asynchronous generator iterator effectively resumes with another awaitable
returned by __anext__(), it picks up where it left off. See PEP 492 and PEP 525.
asynchronous iterable An object that can be used in an async for statement. Must return an asynchronous iterator
from its __aiter__() method. Introduced by PEP 492.
asynchronous iterator An object that implements the __aiter__() and __anext__() methods. __anext__
must return an awaitable object. async for resolves the awaitables returned by an asynchronous iterator’s
__anext__() method until it raises a StopAsyncIteration exception. Introduced by PEP 492.
attribute A value associated with an object which is referenced by name using dotted expressions. For example, if an
object o has an attribute a it would be referenced as o.a.
awaitable An object that can be used in an await expression. Can be a coroutine or an object with an __await__()
method. See also PEP 492.
BDFL Benevolent Dictator For Life, a.k.a. Guido van Rossum, Python’s creator.
binary file A file object able to read and write bytes-like objects. Examples of binary files are files opened in binary mode
('rb', 'wb' or 'rb+'), sys.stdin.buffer, sys.stdout.buffer, and instances of io.BytesIO
and gzip.GzipFile.
See also text file for a file object able to read and write str objects.
bytes-like object An object that supports the Buffer Protocol and can export a C-contiguous buffer. This includes all
bytes, bytearray, and array.array objects, as well as many common memoryview objects. Bytes-like
objects can be used for various operations that work with binary data; these include compression, saving to a binary
file, and sending over a socket.
Some operations need the binary data to be mutable. The documentation often refers to these as “read-write bytes-
like objects”. Example mutable buffer objects include bytearray and a memoryview of a bytearray. Other
operations require the binary data to be stored in immutable objects (“read-only bytes-like objects”); examples of
these include bytes and a memoryview of a bytes object.
bytecode Python source code is compiled into bytecode, the internal representation of a Python program in the CPython
interpreter. The bytecode is also cached in .pyc files so that executing the same file is faster the second time
(recompilation from source to bytecode can be avoided). This “intermediate language” is said to run on a virtual
machine that executes the machine code corresponding to each bytecode. Do note that bytecodes are not expected
to work between different Python virtual machines, nor to be stable between Python releases.
A list of bytecode instructions can be found in the documentation for the dis module.
callback A subroutine function which is passed as an argument to be executed at some point in the future.
class A template for creating user-defined objects. Class definitions normally contain method definitions which operate
on instances of the class.
class variable A variable defined in a class and intended to be modified only at class level (i.e., not in an instance of the
class).
coercion The implicit conversion of an instance of one type to another during an operation which involves two arguments
of the same type. For example, int(3.15) converts the floating point number to the integer 3, but in 3+4.5,
each argument is of a different type (one int, one float), and both must be converted to the same type before they
can be added or it will raise a TypeError. Without coercion, all arguments of even compatible types would have
to be normalized to the same value by the programmer, e.g., float(3)+4.5 rather than just 3+4.5.
complex number An extension of the familiar real number system in which all numbers are expressed as a sum of a real
part and an imaginary part. Imaginary numbers are real multiples of the imaginary unit (the square root of -1),
often written i in mathematics or j in engineering. Python has built-in support for complex numbers, which are
written with this latter notation; the imaginary part is written with a j suffix, e.g., 3+1j. To get access to complex
equivalents of the math module, use cmath. Use of complex numbers is a fairly advanced mathematical feature.
If you’re not aware of a need for them, it’s almost certain you can safely ignore them.
context manager An object which controls the environment seen in a with statement by defining __enter__() and
__exit__() methods. See PEP 343.
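A minimal sketch of such an object, assuming nothing beyond built-in print() (the class name and messages are illustrative only):
class ManagedResource:
    def __enter__(self):
        print("acquiring resource")
        return self                      # value bound by the "as" clause
    def __exit__(self, exc_type, exc_value, traceback):
        print("releasing resource")
        return False                     # do not suppress exceptions

with ManagedResource() as resource:
    print("using", resource)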
context variable A variable which can have different values depending on its context. This is similar to Thread-Local
Storage in which each execution thread may have a different value for a variable. However, with context variables,
there may be several contexts in one execution thread and the main usage for context variables is to keep track of
variables in concurrent asynchronous tasks. See contextvars.
contiguous A buffer is considered contiguous exactly if it is either C-contiguous or Fortran contiguous. Zero-dimensional
buffers are C and Fortran contiguous. In one-dimensional arrays, the items must be laid out in memory next to each
other, in order of increasing indexes starting from zero. In multidimensional C-contiguous arrays, the last index
varies the fastest when visiting items in order of memory address. However, in Fortran contiguous arrays, the first
index varies the fastest.
coroutine Coroutines are a more generalized form of subroutines. Subroutines are entered at one point and exited at
another point. Coroutines can be entered, exited, and resumed at many different points. They can be implemented
with the async def statement. See also PEP 492.
coroutine function A function which returns a coroutine object. A coroutine function may be defined with the async
def statement, and may contain await, async for, and async with keywords. These were introduced by
PEP 492.
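A minimal sketch using only the standard asyncio module (the function name is made up):
import asyncio

async def doubled_after(delay):          # coroutine function
    await asyncio.sleep(delay)           # awaits another awaitable
    return delay * 2

print(asyncio.run(doubled_after(0.1)))   # runs the returned coroutine object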
CPython The canonical implementation of the Python programming language, as distributed on python.org. The term
“CPython” is used when necessary to distinguish this implementation from others such as Jython or IronPython.
decorator A function returning another function, usually applied as a function transformation using the @wrapper
syntax. Common examples for decorators are classmethod() and staticmethod().
The decorator syntax is merely syntactic sugar; the following two function definitions are semantically equivalent:
def f(...):
    ...
f = staticmethod(f)

@staticmethod
def f(...):
    ...
The same concept exists for classes, but is less commonly used there. See the documentation for function definitions
and class definitions for more about decorators.
descriptor Any object which defines the methods __get__(), __set__(), or __delete__(). When a class
attribute is a descriptor, its special binding behavior is triggered upon attribute lookup. Normally, using a.b to
get, set or delete an attribute looks up the object named b in the class dictionary for a, but if b is a descriptor, the
respective descriptor method gets called. Understanding descriptors is a key to a deep understanding of Python
because they are the basis for many features including functions, methods, properties, class methods, static methods,
and reference to super classes.
For more information about descriptors’ methods, see descriptors or the Descriptor How To Guide.
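As a rough illustration, a tiny read-only descriptor (the class names are hypothetical, not taken from this manual):
class Constant:
    def __init__(self, value):
        self.value = value
    def __get__(self, obj, objtype=None):   # invoked on attribute lookup
        return self.value

class Config:
    retries = Constant(3)    # a class attribute that is a descriptor

print(Config().retries)      # prints 3, via Constant.__get__()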
dictionary An associative array, where arbitrary keys are mapped to values. The keys can be any object with
__hash__() and __eq__() methods. Called a hash in Perl.
dictionary comprehension A compact way to process all or part of the elements in an iterable and return a dictionary
with the results. results = {n: n ** 2 for n in range(10)} generates a dictionary containing
key n mapped to value n ** 2. See comprehensions.
dictionary view The objects returned from dict.keys(), dict.values(), and dict.items() are called
dictionary views. They provide a dynamic view on the dictionary’s entries, which means that when the dictionary
changes, the view reflects these changes. To force the dictionary view to become a full list use list(dictview).
See dict-views.
docstring A string literal which appears as the first expression in a class, function or module. While ignored when
the suite is executed, it is recognized by the compiler and put into the __doc__ attribute of the enclosing class,
function or module. Since it is available via introspection, it is the canonical place for documentation of the object.
duck-typing A programming style which does not look at an object’s type to determine if it has the right interface;
instead, the method or attribute is simply called or used (“If it looks like a duck and quacks like a duck, it must
be a duck.”) By emphasizing interfaces rather than specific types, well-designed code improves its flexibility by
allowing polymorphic substitution. Duck-typing avoids tests using type() or isinstance(). (Note, however,
that duck-typing can be complemented with abstract base classes.) Instead, it typically employs hasattr() tests
or EAFP programming.
EAFP Easier to ask for forgiveness than permission. This common Python coding style assumes the existence of valid
keys or attributes and catches exceptions if the assumption proves false. This clean and fast style is characterized
by the presence of many try and except statements. The technique contrasts with the LBYL style common to
many other languages such as C.
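A small sketch of the two styles side by side, using a made-up settings dictionary:
settings = {"host": "localhost"}

# EAFP: assume the key exists and handle the failure
try:
    timeout = settings["timeout"]
except KeyError:
    timeout = 30

# LBYL equivalent, testing before the lookup
timeout = settings["timeout"] if "timeout" in settings else 30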
expression A piece of syntax which can be evaluated to some value. In other words, an expression is an accumulation
of expression elements like literals, names, attribute access, operators or function calls which all return a value.
In contrast to many other languages, not all language constructs are expressions. There are also statements which
cannot be used as expressions, such as while. Assignments are also statements, not expressions.
extension module A module written in C or C++, using Python’s C API to interact with the core and with user code.
f-string String literals prefixed with 'f' or 'F' are commonly called “f-strings” which is short for formatted string
literals. See also PEP 498.
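For instance (the values are arbitrary):
name, count = "spam", 3
print(f"{count} servings of {name!r}")   # 3 servings of 'spam'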
file object An object exposing a file-oriented API (with methods such as read() or write()) to an underlying re-
source. Depending on the way it was created, a file object can mediate access to a real on-disk file or to another
type of storage or communication device (for example standard input/output, in-memory buffers, sockets, pipes,
etc.). File objects are also called file-like objects or streams.
There are actually three categories of file objects: raw binary files, buffered binary files and text files. Their interfaces
are defined in the io module. The canonical way to create a file object is by using the open() function.
file-like object A synonym for file object.
finder An object that tries to find the loader for a module that is being imported.
Since Python 3.3, there are two types of finder: meta path finders for use with sys.meta_path, and path entry
finders for use with sys.path_hooks.
See PEP 302, PEP 420 and PEP 451 for much more detail.
floor division Mathematical division that rounds down to nearest integer. The floor division operator is //. For example,
the expression 11 // 4 evaluates to 2 in contrast to the 2.75 returned by float true division. Note that (-11)
// 4 is -3 because that is -2.75 rounded downward. See PEP 238.
function A series of statements which returns some value to a caller. It can also be passed zero or more arguments which
may be used in the execution of the body. See also parameter, method, and the function section.
function annotation An annotation of a function parameter or return value.
Function annotations are usually used for type hints: for example, this function is expected to take two int argu-
ments and is also expected to have an int return value:
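One minimal example of such a function (the name is illustrative):
def sum_two_numbers(a: int, b: int) -> int:
    return a + b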
garbage collection The process of freeing memory when it is not used anymore. Python performs garbage collection
via reference counting and a cyclic garbage collector that is able to detect and break reference cycles. The garbage
collector can be controlled using the gc module.
generator A function which returns a generator iterator. It looks like a normal function except that it contains yield
expressions for producing a series of values usable in a for-loop or that can be retrieved one at a time with the
next() function.
Usually refers to a generator function, but may refer to a generator iterator in some contexts. In cases where the
intended meaning isn’t clear, using the full terms avoids ambiguity.
generator iterator An object created by a generator function.
Each yield temporarily suspends processing, remembering the location execution state (including local variables
and pending try-statements). When the generator iterator resumes, it picks up where it left off (in contrast to
functions which start fresh on every invocation).
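A short sketch showing a generator function and the generator iterator it produces (the name countdown is made up):
def countdown(n):          # generator function
    while n > 0:
        yield n            # suspends here, remembering n
        n -= 1

it = countdown(3)          # generator iterator
print(next(it))            # 3 -- resumes, yields, and suspends again
print(list(it))            # [2, 1]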
generator expression An expression that returns an iterator. It looks like a normal expression followed by a for clause
defining a loop variable, range, and an optional if clause. The combined expression generates values for an en-
closing function:
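For instance, a short interactive sketch:
>>> sum(i * i for i in range(10))   # sum of squares 0, 1, 4, ... 81
285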
generic function A function composed of multiple functions implementing the same operation for different types. Which
implementation should be used during a call is determined by the dispatch algorithm.
See also the single dispatch glossary entry, the functools.singledispatch() decorator, and PEP 443.
generic type A type that can be parameterized; typically a container like list. Used for type hints and annotations.
See PEP 483 for more details, and typing or generic alias type for its uses.
GIL See global interpreter lock.
global interpreter lock The mechanism used by the CPython interpreter to assure that only one thread executes Python
bytecode at a time. This simplifies the CPython implementation by making the object model (including critical
built-in types such as dict) implicitly safe against concurrent access. Locking the entire interpreter makes it
easier for the interpreter to be multi-threaded, at the expense of much of the parallelism afforded by multi-processor
machines.
However, some extension modules, either standard or third-party, are designed so as to release the GIL when doing
computationally-intensive tasks such as compression or hashing. Also, the GIL is always released when doing I/O.
Past efforts to create a “free-threaded” interpreter (one which locks shared data at a much finer granularity) have not
been successful because performance suffered in the common single-processor case. It is believed that overcoming
this performance issue would make the implementation much more complicated and therefore costlier to maintain.
hash-based pyc A bytecode cache file that uses the hash rather than the last-modified time of the corresponding source
file to determine its validity. See pyc-invalidation.
hashable An object is hashable if it has a hash value which never changes during its lifetime (it needs a __hash__()
method), and can be compared to other objects (it needs an __eq__() method). Hashable objects which compare
equal must have the same hash value.
Hashability makes an object usable as a dictionary key and a set member, because these data structures use the hash
value internally.
Most of Python’s immutable built-in objects are hashable; mutable containers (such as lists or dictionaries) are not;
immutable containers (such as tuples and frozensets) are only hashable if their elements are hashable. Objects which
are instances of user-defined classes are hashable by default. They all compare unequal (except with themselves),
and their hash value is derived from their id().
IDLE An Integrated Development Environment for Python. IDLE is a basic editor and interpreter environment which
ships with the standard distribution of Python.
immutable An object with a fixed value. Immutable objects include numbers, strings and tuples. Such an object cannot
be altered. A new object has to be created if a different value has to be stored. They play an important role in
places where a constant hash value is needed, for example as a key in a dictionary.
import path A list of locations (or path entries) that are searched by the path based finder for modules to import. During
import, this list of locations usually comes from sys.path, but for subpackages it may also come from the parent
package’s __path__ attribute.
importing The process by which Python code in one module is made available to Python code in another module.
importer An object that both finds and loads a module; both a finder and loader object.
interactive Python has an interactive interpreter which means you can enter statements and expressions at the interpreter
prompt, immediately execute them and see their results. Just launch python with no arguments (possibly by
selecting it from your computer’s main menu). It is a very powerful way to test out new ideas or inspect modules
and packages (remember help(x)).
interpreted Python is an interpreted language, as opposed to a compiled one, though the distinction can be blurry because
of the presence of the bytecode compiler. This means that source files can be run directly without explicitly creating
an executable which is then run. Interpreted languages typically have a shorter development/debug cycle than
compiled ones, though their programs generally also run more slowly. See also interactive.
interpreter shutdown When asked to shut down, the Python interpreter enters a special phase where it gradually releases
all allocated resources, such as modules and various critical internal structures. It also makes several calls to the
garbage collector. This can trigger the execution of code in user-defined destructors or weakref callbacks. Code
executed during the shutdown phase can encounter various exceptions as the resources it relies on may not function
anymore (common examples are library modules or the warnings machinery).
The main reason for interpreter shutdown is that the __main__ module or the script being run has finished
executing.
iterable An object capable of returning its members one at a time. Examples of iterables include all sequence types (such
as list, str, and tuple) and some non-sequence types like dict, file objects, and objects of any classes you
define with an __iter__() method or with a __getitem__() method that implements Sequence semantics.
Iterables can be used in a for loop and in many other places where a sequence is needed (zip(), map(), …).
When an iterable object is passed as an argument to the built-in function iter(), it returns an iterator for the
object. This iterator is good for one pass over the set of values. When using iterables, it is usually not necessary to
call iter() or deal with iterator objects yourself. The for statement does that automatically for you, creating
a temporary unnamed variable to hold the iterator for the duration of the loop. See also iterator, sequence, and
generator.
iterator An object representing a stream of data. Repeated calls to the iterator’s __next__() method (or passing
it to the built-in function next()) return successive items in the stream. When no more data are available a
StopIteration exception is raised instead. At this point, the iterator object is exhausted and any further calls
to its __next__() method just raise StopIteration again. Iterators are required to have an __iter__()
method that returns the iterator object itself so every iterator is also iterable and may be used in most places where
other iterables are accepted. One notable exception is code which attempts multiple iteration passes. A container
object (such as a list) produces a fresh new iterator each time you pass it to the iter() function or use it in a
for loop. Attempting this with an iterator will just return the same exhausted iterator object used in the previous
iteration pass, making it appear like an empty container.
More information can be found in typeiter.
key function A key function or collation function is a callable that returns a value used for sorting or ordering. For
example, locale.strxfrm() is used to produce a sort key that is aware of locale specific sort conventions.
A number of tools in Python accept key functions to control how elements are ordered or grouped. They in-
clude min(), max(), sorted(), list.sort(), heapq.merge(), heapq.nsmallest(), heapq.
nlargest(), and itertools.groupby().
There are several ways to create a key function. For example, the str.lower() method can serve as a key
function for case insensitive sorts. Alternatively, a key function can be built from a lambda expression such
as lambda r: (r[0], r[2]). Also, the operator module provides three key function constructors:
attrgetter(), itemgetter(), and methodcaller(). See the Sorting HOW TO for examples of how
to create and use key functions.
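Two brief sketches with made-up data:
words = ["banana", "Apple", "cherry"]
print(sorted(words, key=str.lower))        # case-insensitive ordering

from operator import itemgetter
pairs = [("b", 2), ("a", 1), ("c", 3)]
print(sorted(pairs, key=itemgetter(1)))    # order by the second item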
keyword argument See argument.
lambda An anonymous inline function consisting of a single expression which is evaluated when the function is called.
The syntax to create a lambda function is lambda [parameters]: expression
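For example, used as a key function with hypothetical data:
points = [(1, 5), (3, 2), (2, 8)]
print(sorted(points, key=lambda p: p[1]))   # [(3, 2), (1, 5), (2, 8)]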
LBYL Look before you leap. This coding style explicitly tests for pre-conditions before making calls or lookups. This
style contrasts with the EAFP approach and is characterized by the presence of many if statements.
In a multi-threaded environment, the LBYL approach can risk introducing a race condition between “the looking”
and “the leaping”. For example, the code, if key in mapping: return mapping[key] can fail if
another thread removes key from mapping after the test, but before the lookup. This issue can be solved with locks
or by using the EAFP approach.
list A built-in Python sequence. Despite its name it is more akin to an array in other languages than to a linked list since
access to elements is O(1).
list comprehension A compact way to process all or part of the elements in a sequence and return a list with the results.
result = ['{:#04x}'.format(x) for x in range(256) if x % 2 == 0] generates a list
of strings containing even hex numbers (0x..) in the range from 0 to 255. The if clause is optional. If omitted,
all elements in range(256) are processed.
loader An object that loads a module. It must define a method named load_module(). A loader is typically returned
by a finder. See PEP 302 for details and importlib.abc.Loader for an abstract base class.
magic method An informal synonym for special method.
mapping A container object that supports arbitrary key lookups and implements the methods specified in the Mapping
or MutableMapping abstract base classes. Examples include dict, collections.defaultdict,
collections.OrderedDict and collections.Counter.
meta path finder A finder returned by a search of sys.meta_path. Meta path finders are related to, but different
from path entry finders.
See importlib.abc.MetaPathFinder for the methods that meta path finders implement.
metaclass The class of a class. Class definitions create a class name, a class dictionary, and a list of base classes. The
metaclass is responsible for taking those three arguments and creating the class. Most object oriented program-
ming languages provide a default implementation. What makes Python special is that it is possible to create custom
metaclasses. Most users never need this tool, but when the need arises, metaclasses can provide powerful, ele-
gant solutions. They have been used for logging attribute access, adding thread-safety, tracking object creation,
implementing singletons, and many other tasks.
More information can be found in metaclasses.
method A function which is defined inside a class body. If called as an attribute of an instance of that class, the method
will get the instance object as its first argument (which is usually called self). See function and nested scope.
method resolution order Method Resolution Order is the order in which base classes are searched for a member during
lookup. See The Python 2.3 Method Resolution Order for details of the algorithm used by the Python interpreter
since the 2.3 release.
module An object that serves as an organizational unit of Python code. Modules have a namespace containing arbitrary
Python objects. Modules are loaded into Python by the process of importing.
See also package.
module spec A namespace containing the import-related information used to load a module. An instance of
importlib.machinery.ModuleSpec.
MRO See method resolution order.
mutable Mutable objects can change their value but keep their id(). See also immutable.
named tuple The term “named tuple” applies to any type or class that inherits from tuple and whose indexable elements
are also accessible using named attributes. The type or class may have other features as well.
Several built-in types are named tuples, including the values returned by time.localtime() and os.
stat(). Another example is sys.float_info:
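A short interactive sketch:
>>> import sys
>>> sys.float_info[1]                  # indexed access
1024
>>> sys.float_info.max_exp             # named field access
1024
>>> isinstance(sys.float_info, tuple)  # kind of tuple
True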
Some named tuples are built-in types (such as the above examples). Alternatively, a named tuple can be created
from a regular class definition that inherits from tuple and that defines named fields. Such a class can be written
by hand or it can be created with the factory function collections.namedtuple(). The latter technique
also adds some extra methods that may not be found in hand-written or built-in named tuples.
namespace The place where a variable is stored. Namespaces are implemented as dictionaries. There are the local, global
and built-in namespaces as well as nested namespaces in objects (in methods). Namespaces support modularity by
preventing naming conflicts. For instance, the functions builtins.open and os.open() are distinguished
by their namespaces. Namespaces also aid readability and maintainability by making it clear which module im-
plements a function. For instance, writing random.seed() or itertools.islice() makes it clear that
those functions are implemented by the random and itertools modules, respectively.
namespace package A PEP 420 package which serves only as a container for subpackages. Namespace packages may
have no physical representation, and specifically are not like a regular package because they have no __init__.
py file.
See also module.
nested scope The ability to refer to a variable in an enclosing definition. For instance, a function defined inside another
function can refer to variables in the outer function. Note that nested scopes by default work only for reference and
not for assignment. Local variables both read and write in the innermost scope. Likewise, global variables read and
write to the global namespace. The nonlocal keyword allows writing to outer scopes.
new-style class Old name for the flavor of classes now used for all class objects. In earlier Python versions,
only new-style classes could use Python’s newer, versatile features like __slots__, descriptors, properties,
__getattribute__(), class methods, and static methods.
object Any data with state (attributes or value) and defined behavior (methods). Also the ultimate base class of any
new-style class.
package A Python module which can contain submodules or recursively, subpackages. Technically, a package is a Python
module with an __path__ attribute.
See also regular package and namespace package.
parameter A named entity in a function (or method) definition that specifies an argument (or in some cases, arguments)
that the function can accept. There are five kinds of parameter:
• positional-or-keyword: specifies an argument that can be passed either positionally or as a keyword argument.
This is the default kind of parameter, for example foo and bar in the example following this list.
• positional-only: specifies an argument that can be supplied only by position. Positional-only parameters can
be defined by including a / character in the parameter list of the function definition after them, for example
posonly1 and posonly2 in the same example.
• keyword-only: specifies an argument that can be supplied only by keyword. Keyword-only parameters can be
defined by including a single var-positional parameter or bare * in the parameter list of the function definition
before them, for example kw_only1 and kw_only2 in the same example.
• var-positional: specifies that an arbitrary sequence of positional arguments can be provided (in addition to any
positional arguments already accepted by other parameters). Such a parameter can be defined by prepending
the parameter name with *, for example args in the same example.
• var-keyword: specifies that arbitrarily many keyword arguments can be provided (in addition to any key-
word arguments already accepted by other parameters). Such a parameter can be defined by prepending the
parameter name with **, for example kwargs in the same example.
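All five kinds can appear together in a single signature; the following is a minimal sketch reusing the parameter names from the list above (the call and its result are only illustrative):
def func(posonly1, posonly2, /, foo, bar, *args, kw_only1, kw_only2, **kwargs):
    return posonly1, posonly2, foo, bar, args, kw_only1, kw_only2, kwargs

print(func(1, 2, 3, 4, 5, kw_only1=6, kw_only2=7, extra=8))
# (1, 2, 3, 4, (5,), 6, 7, {'extra': 8})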
Parameters can specify both optional and required arguments, as well as default values for some optional arguments.
See also the argument glossary entry, the FAQ question on the difference between arguments and parameters, the
inspect.Parameter class, the function section, and PEP 362.
path entry A single location on the import path which the path based finder consults to find modules for importing.
path entry finder A finder returned by a callable on sys.path_hooks (i.e. a path entry hook) which knows how to
locate modules given a path entry.
See importlib.abc.PathEntryFinder for the methods that path entry finders implement.
path entry hook A callable on the sys.path_hooks list which returns a path entry finder if it knows how to find
modules on a specific path entry.
path based finder One of the default meta path finders which searches an import path for modules.
path-like object An object representing a file system path. A path-like object is either a str or bytes object represent-
ing a path, or an object implementing the os.PathLike protocol. An object that supports the os.PathLike
protocol can be converted to a str or bytes file system path by calling the os.fspath() function; os.
fsdecode() and os.fsencode() can be used to guarantee a str or bytes result instead, respectively.
Introduced by PEP 519.
PEP Python Enhancement Proposal. A PEP is a design document providing information to the Python community,
or describing a new feature for Python or its processes or environment. PEPs should provide a concise technical
specification and a rationale for proposed features.
PEPs are intended to be the primary mechanisms for proposing major new features, for collecting community input
on an issue, and for documenting the design decisions that have gone into Python. The PEP author is responsible
for building consensus within the community and documenting dissenting opinions.
See PEP 1.
portion A set of files in a single directory (possibly stored in a zip file) that contribute to a namespace package, as defined
in PEP 420.
positional argument See argument.
provisional API A provisional API is one which has been deliberately excluded from the standard library’s backwards
compatibility guarantees. While major changes to such interfaces are not expected, as long as they are marked
provisional, backwards incompatible changes (up to and including removal of the interface) may occur if deemed
necessary by core developers. Such changes will not be made gratuitously – they will occur only if serious funda-
mental flaws are uncovered that were missed prior to the inclusion of the API.
Even for provisional APIs, backwards incompatible changes are seen as a “solution of last resort” - every attempt
will still be made to find a backwards compatible resolution to any identified problems.
This process allows the standard library to continue to evolve over time, without locking in problematic design
errors for extended periods of time. See PEP 411 for more details.
Pythonic An idea or piece of code which closely follows the most common idioms of the Python language, rather
than implementing code using concepts common to other languages. For example, a common idiom in Python is
to loop over all the elements of an iterable with a for statement (for piece in food: print(piece)). Many
other languages don’t have this type of construct, so people unfamiliar with Python sometimes use a numerical
counter instead:
for i in range(len(food)):
    print(food[i])
qualified name A dotted name showing the “path” from a module’s global scope to a class, function or method defined
in that module, as defined in PEP 3155. For top-level functions and classes, the qualified name is the same as the
object’s name:
>>> class C:
...     class D:
...         def meth(self):
...             pass
...
>>> C.__qualname__
'C'
>>> C.D.__qualname__
'C.D'
>>> C.D.meth.__qualname__
'C.D.meth'
When used to refer to modules, the fully qualified name means the entire dotted path to the module, including any
parent packages, e.g. email.mime.text:
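For instance:
>>> import email.mime.text
>>> email.mime.text.__name__
'email.mime.text'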
reference count The number of references to an object. When the reference count of an object drops to zero, it is
deallocated. Reference counting is generally not visible to Python code, but it is a key element of the CPython
implementation. The sys module defines a getrefcount() function that programmers can call to return the
reference count for a particular object.
regular package A traditional package, such as a directory containing an __init__.py file.
See also namespace package.
__slots__ A declaration inside a class that saves memory by pre-declaring space for instance attributes and eliminating
instance dictionaries. Though popular, the technique is somewhat tricky to get right and is best reserved for rare
cases where there are large numbers of instances in a memory-critical application.
sequence An iterable which supports efficient element access using integer indices via the __getitem__() special
method and defines a __len__() method that returns the length of the sequence. Some built-in sequence types
are list, str, tuple, and bytes. Note that dict also supports __getitem__() and __len__(), but is
considered a mapping rather than a sequence because the lookups use arbitrary immutable keys rather than integers.
The collections.abc.Sequence abstract base class defines a much richer interface that goes be-
yond just __getitem__() and __len__(), adding count(), index(), __contains__(), and
__reversed__(). Types that implement this expanded interface can be registered explicitly using
register().
set comprehension A compact way to process all or part of the elements in an iterable and return a set with the
results. results = {c for c in 'abracadabra' if c not in 'abc'} generates the set of
strings {'r', 'd'}. See comprehensions.
single dispatch A form of generic function dispatch where the implementation is chosen based on the type of a single
argument.
slice An object usually containing a portion of a sequence. A slice is created using the subscript notation, [] with
colons between numbers when several are given, such as in variable_name[1:3:5]. The bracket (subscript)
notation uses slice objects internally.
special method A method that is called implicitly by Python to execute a certain operation on a type, such as addi-
tion. Such methods have names starting and ending with double underscores. Special methods are documented in
specialnames.
statement A statement is part of a suite (a “block” of code). A statement is either an expression or one of several
constructs with a keyword, such as if, while or for.
text encoding A codec which encodes Unicode strings to bytes.
text file A file object able to read and write str objects. Often, a text file actually accesses a byte-oriented datastream
and handles the text encoding automatically. Examples of text files are files opened in text mode ('r' or 'w'),
sys.stdin, sys.stdout, and instances of io.StringIO.
See also binary file for a file object able to read and write bytes-like objects.
triple-quoted string A string which is bound by three instances of either a quotation mark (") or an apostrophe (').
While they don’t provide any functionality not available with single-quoted strings, they are useful for a number of
reasons. They allow you to include unescaped single and double quotes within a string and they can span multiple
lines without the use of the continuation character, making them especially useful when writing docstrings.
type The type of a Python object determines what kind of object it is; every object has a type. An object’s type is
accessible as its __class__ attribute or can be retrieved with type(obj).
type alias A synonym for a type, created by assigning the type to an identifier.
Type aliases are useful for simplifying type hints. For example:
def remove_gray_shades(
        colors: list[tuple[int, int, int]]) -> list[tuple[int, int, int]]:
    pass
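The same signature reads more clearly once the tuple type is given an alias; a sketch (the alias name Color is an assumption, not part of the manual's snippet):
Color = tuple[int, int, int]

def remove_gray_shades(colors: list[Color]) -> list[Color]:
    pass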
type hint An annotation that specifies the expected type for a variable, a class attribute, or a function parameter or
return value.
Type hints are optional and are not enforced by Python, but they are useful to static type analysis tools and aid
IDEs with code completion and refactoring.
Type hints of global variables, class attributes, and functions, but not local variables, can be accessed using
typing.get_type_hints().
See typing and PEP 484, which describe this functionality.
universal newlines A manner of interpreting text streams in which all of the following are recognized as ending a line:
the Unix end-of-line convention '\n', the Windows convention '\r\n', and the old Macintosh convention '\r'.
See PEP 278 and PEP 3116, as well as bytes.splitlines() for an additional use.
variable annotation An annotation of a variable or a class attribute.
When annotating a variable or a class attribute, assignment is optional:
class C:
    field: 'annotation'
Variable annotations are usually used for type hints: for example this variable is expected to take int values:
count: int = 0
These documents are generated from reStructuredText sources by Sphinx, a document processor specifically written for
the Python documentation.
Development of the documentation and its toolchain is an entirely volunteer effort, just like Python itself. If you want
to contribute, please take a look at the reporting-bugs page for information on how to do so. New volunteers are always
welcome!
Many thanks go to:
• Fred L. Drake, Jr., the creator of the original Python documentation toolset and writer of much of the content;
• the Docutils project for creating reStructuredText and the Docutils suite;
• Fredrik Lundh for his Alternative Python Reference project from which Sphinx got many good ideas.
Many people have contributed to the Python language, the Python standard library, and the Python documentation. See
Misc/ACKS in the Python source distribution for a partial list of contributors.
It is only with the input and contributions of the Python community that Python has such wonderful documentation –
Thank You!
Python was created in the early 1990s by Guido van Rossum at Stichting Mathematisch Centrum (CWI, see https://www.
cwi.nl/) in the Netherlands as a successor of a language called ABC. Guido remains Python’s principal author, although
it includes many contributions from others.
In 1995, Guido continued his work on Python at the Corporation for National Research Initiatives (CNRI, see https:
//www.cnri.reston.va.us/) in Reston, Virginia where he released several versions of the software.
In May 2000, Guido and the Python core development team moved to BeOpen.com to form the BeOpen PythonLabs
team. In October of the same year, the PythonLabs team moved to Digital Creations (now Zope Corporation; see https:
//www.zope.org/). In 2001, the Python Software Foundation (PSF, see https://www.python.org/psf/) was formed, a non-
profit organization created specifically to own Python-related Intellectual Property. Zope Corporation is a sponsoring
member of the PSF.
All Python releases are Open Source (see https://opensource.org/ for the Open Source Definition). Historically, most,
but not all, Python releases have also been GPL-compatible; the table below summarizes the various releases.
Note: GPL-compatible doesn’t mean that we’re distributing Python under the GPL. All Python licenses, unlike the GPL,
let you distribute a modified version without making your changes open source. The GPL-compatible licenses make it
possible to combine Python with other software that is released under the GPL; the others don’t.
Thanks to the many outside volunteers who have worked under Guido’s direction to make these releases possible.
Python software and documentation are licensed under the PSF License Agreement.
Starting with Python 3.8.6, examples, recipes, and other code in the documentation are dual licensed under the PSF
License Agreement and the Zero-Clause BSD license.
Some software incorporated into Python is under different licenses. The licenses are listed with code falling under that
license. See Licenses and Acknowledgements for Incorporated Software for an incomplete list of these licenses.
2. Subject to the terms and conditions of this License Agreement, PSF hereby
grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
agrees to include in any such work a brief summary of the changes made to Python 3.9.6.
USE OF PYTHON 3.9.6 WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.
5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON 3.9.6
FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF
Agreement does not grant permission to use PSF trademarks or trade name in a
third party.
2. Subject to the terms and conditions of this BeOpen Python License Agreement,
BeOpen hereby grants Licensee a non-exclusive, royalty-free, world-wide license
to reproduce, analyze, test, perform and/or display publicly, prepare derivative
works, distribute, and otherwise use the Software alone or in any derivative
version, provided, however, that the BeOpen Python License is retained in the
Software, alone or in any derivative version prepared by Licensee.
4. BEOPEN SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE SOFTWARE FOR
ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF USING,
MODIFYING OR DISTRIBUTING THE SOFTWARE, OR ANY DERIVATIVE THEREOF, EVEN IF
ADVISED OF THE POSSIBILITY THEREOF.
2. Subject to the terms and conditions of this License Agreement, CNRI hereby
grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
analyze, test, perform and/or display publicly, prepare derivative works,
distribute, and otherwise use Python 1.6.1 alone or in any derivative version,
provided, however, that CNRI's License Agreement and CNRI's notice of copyright,
i.e., "Copyright © 1995-2001 Corporation for National Research Initiatives; All
Rights Reserved" are retained in Python 1.6.1 alone or in any derivative version
prepared by Licensee. Alternately, in lieu of CNRI's License Agreement,
Licensee may substitute the following text (omitting the quotes): "Python 1.6.1
is made available subject to the terms and conditions in CNRI's License
Agreement. This Agreement together with Python 1.6.1 may be located on the
Internet using the following unique, persistent identifier (known as a handle):
1895.22/1013. This Agreement may also be obtained from a proxy server on the
Internet using the following URL: http://hdl.handle.net/1895.22/1013."
4. CNRI is making Python 1.6.1 available to Licensee on an "AS IS" basis. CNRI
MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE,
BUT NOT LIMITATION, CNRI MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY
OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF
PYTHON 1.6.1 WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.
5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON 1.6.1 FOR
ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF
MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 1.6.1, OR ANY DERIVATIVE
THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
Permission to use, copy, modify, and distribute this software and its
documentation for any purpose and without fee is hereby granted, provided that
the above copyright notice appear in all copies and that both that copyright
notice and this permission notice appear in supporting documentation, and that
the name of Stichting Mathematisch Centrum or CWI not be used in advertising or
publicity pertaining to distribution of the software without specific, written
prior permission.
C.2.5 ZERO-CLAUSE BSD LICENSE FOR CODE IN THE PYTHON 3.9.6 DOCUMENTATION
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH
REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,
INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR
OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.
This section is an incomplete, but growing list of licenses and acknowledgements for third-party software incorporated in
the Python distribution.
C.3.2 Sockets
The socket module uses the functions getaddrinfo() and getnameinfo(), which are coded in separate
source files from the WIDE Project, http://www.wide.ad.jp/.
THIS SOFTWARE IS PROVIDED BY THE PROJECT AND CONTRIBUTORS ``AS IS'' AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE PROJECT OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.
Permission to use, copy, modify, and distribute this Python software and
its associated documentation for any purpose without fee is hereby
granted, provided that the above copyright notice appears in all copies,
and that both that copyright notice and this permission notice appear in
supporting documentation, and that the name of neither Automatrix,
Bioreason or Mojam Media be used in advertising or publicity pertaining to
distribution of the software without specific, written prior permission.
SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
C.3.9 Select kqueue
The select module contains the following notice for the kqueue interface:
Copyright (c) 2000 Doug White, 2006 James Knight, 2007 Christian Heimes
All rights reserved.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
C.3.10 SipHash24
The file Python/pyhash.c contains Marek Majkowski's implementation of Dan Bernstein's SipHash24 algorithm. It
contains the following note:
<MIT License>
Copyright (c) 2013 Marek Majkowski <[email protected]>
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
</MIT License>
Original location:
https://github.com/majek/csiphash/
The file Python/dtoa.c, which supplies C functions dtoa and strtod for conversion of C doubles to and from strings,
is derived from the file of the same name by David M. Gay, currently available from http://www.netlib.org/fp/. The
original file, as retrieved on March 16, 2009, contains the following copyright and licensing notice:
/****************************************************************
*
* The author of this software is David M. Gay.
*
* Copyright (c) 1991, 2000, 2001 by Lucent Technologies.
*
* Permission to use, copy, modify, and distribute this software for any
* purpose without fee is hereby granted, provided that this entire notice
* is included in all copies of any software which is or includes a copy
* or modification of this software and in all copies of the supporting
* documentation for such software.
*
* THIS SOFTWARE IS BEING PROVIDED "AS IS", WITHOUT ANY EXPRESS OR IMPLIED
* WARRANTY. IN PARTICULAR, NEITHER THE AUTHOR NOR LUCENT MAKES ANY
* REPRESENTATION OR WARRANTY OF ANY KIND CONCERNING THE MERCHANTABILITY
* OF THIS SOFTWARE OR ITS FITNESS FOR ANY PARTICULAR PURPOSE.
C.3.12 OpenSSL
The modules hashlib, posix, ssl, crypt use the OpenSSL library for added performance if made available by the
operating system. Additionally, the Windows and Mac OS X installers for Python may include a copy of the OpenSSL
libraries, so we include a copy of the OpenSSL license here:
LICENSE ISSUES
==============
The OpenSSL toolkit stays under a dual license, i.e. both the conditions of
the OpenSSL License and the original SSLeay license apply to the toolkit.
See below for the actual license texts. Actually both licenses are BSD-style
Open Source licenses. In case of any license issues related to OpenSSL
please contact [email protected].
OpenSSL License
---------------
/* ====================================================================
* Copyright (c) 1998-2008 The OpenSSL Project. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
*
* 3. All advertising materials mentioning features or use of this
* software must display the following acknowledgment:
* "This product includes software developed by the OpenSSL Project
* for use in the OpenSSL Toolkit. (http://www.openssl.org/)"
*
* 4. The names "OpenSSL Toolkit" and "OpenSSL Project" must not be used to
* endorse or promote products derived from this software without
* prior written permission. For written permission, please contact
* [email protected].
*
* 5. Products derived from this software may not be called "OpenSSL"
* nor may "OpenSSL" appear in their names without prior written
* permission of the OpenSSL Project.
*
* 6. Redistributions of any form whatsoever must retain the following
* acknowledgment:
* "This product includes software developed by the OpenSSL Project
* for use in the OpenSSL Toolkit (http://www.openssl.org/)"
*
C.3.13 expat
The pyexpat extension is built using an included copy of the expat sources unless the build is configured
--with-system-expat:
Copyright (c) 1998, 1999, 2000 Thai Open Source Software Center Ltd
and Clark Cooper
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
C.3.14 libffi
The _ctypes extension is built using an included copy of the libffi sources unless the build is configured
--with-system-libffi:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
C.3.15 zlib
The zlib extension is built using an included copy of the zlib sources if the zlib version found on the system is too old
to be used for the build:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software
in a product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
C.3.16 cfuhash
The implementation of the hash table used by the tracemalloc is based on the cfuhash project:
Copyright (c) 2005 Don Owens
All rights reserved.
C.3.17 libmpdec
The _decimal module is built using an included copy of the libmpdec library unless the build is configured
--with-system-libmpdec:
Copyright (c) 2008-2020 Stefan Krah. All rights reserved.
The C14N 2.0 test suite in the test package (Lib/test/xmltestdata/c14n-20/) was retrieved from the W3C
website at https://www.w3.org/TR/xml-c14n2-testcases/ and is distributed under the 3-clause BSD license:
COPYRIGHT
See History and License for complete license and permissions information.