Extending and Embedding Python, Release 3.12.7
This document describes how to write modules in C or C++ to extend the Python interpreter with new modules.
Those modules can not only define new functions but also new object types and their methods. The document also
describes how to embed the Python interpreter in another application, for use as an extension language. Finally,
it shows how to compile and link extension modules so that they can be loaded dynamically (at run time) into the
interpreter, if the underlying operating system supports this feature.
This document assumes basic knowledge about Python. For an informal introduction to the language, see tutorial-
index. reference-index gives a more formal definition of the language. library-index documents the existing object
types, functions and modules (both built-in and written in Python) that give the language its wide application range.
For a detailed description of the whole Python/C API, see the separate c-api-index.
CHAPTER ONE

RECOMMENDED THIRD PARTY TOOLS
This guide only covers the basic tools for creating extensions provided as part of this version of CPython. Third party
tools like Cython, cffi, SWIG and Numba offer both simpler and more sophisticated approaches to creating C and
C++ extensions for Python.
CHAPTER TWO

CREATING EXTENSIONS WITHOUT THIRD PARTY TOOLS
This section of the guide covers creating C and C++ extensions without assistance from third party tools. It is intended
primarily for creators of those tools, rather than being a recommended way to create your own C extensions.
Note:
The C extension interface is specific to CPython, and extension modules do not work on other Python implemen-
tations. In many cases, it is possible to avoid writing C extensions and preserve portability to other implementa-
tions. For example, if your use case is calling C library functions or system calls, you should consider using the
ctypes module or the cffi library rather than writing custom C code. These modules let you write Python code
to interface with C code and are more portable between implementations of Python than writing and compiling
a C extension module.
Begin by creating a file spammodule.c. (Historically, if a module is called spam, the C file containing its imple-
mentation is called spammodule.c; if the module name is very long, like spammify, the module name can be just
spammify.c.)
The first two lines of our file can be:
#define PY_SSIZE_T_CLEAN
#include <Python.h>
which pulls in the Python API (you can add a comment describing the purpose of the module and a copyright notice if you like).

[1] An interface for this function already exists in the standard module os — it was chosen as a simple and straightforward example.
Note:
Since Python may define some pre-processor definitions which affect the standard headers on some systems, you
must include Python.h before any standard headers are included.
It is recommended to always define PY_SSIZE_T_CLEAN before including Python.h. See Extracting Parameters
in Extension Functions for a description of this macro.
All user-visible symbols defined by Python.h have a prefix of Py or PY, except those defined in standard header
files. For convenience, and since they are used extensively by the Python interpreter, "Python.h" includes a few
standard header files: <stdio.h>, <string.h>, <errno.h>, and <stdlib.h>. If the latter header file does not
exist on your system, it declares the functions malloc(), free() and realloc() directly.
The next thing we add to our module file is the C function that will be called when the Python expression spam.
system(string) is evaluated (we’ll see shortly how it ends up being called):
static PyObject *
spam_system(PyObject *self, PyObject *args)
{
const char *command;
    int sts;

    if (!PyArg_ParseTuple(args, "s", &command))
        return NULL;
    sts = system(command);
    return PyLong_FromLong(sts);
}
There is a straightforward translation from the argument list in Python (for example, the single expression "ls -l")
to the arguments passed to the C function. The C function always has two arguments, conventionally named self and
args.
The self argument points to the module object for module-level functions; for a method it would point to the object
instance.
The args argument will be a pointer to a Python tuple object containing the arguments. Each item of the tuple
corresponds to an argument in the call’s argument list. The arguments are Python objects — in order to do anything
with them in our C function we have to convert them to C values. The function PyArg_ParseTuple() in the Python
API checks the argument types and converts them to C values. It uses a template string to determine the required
types of the arguments as well as the types of the C variables into which to store the converted values. More about
this later.
PyArg_ParseTuple() returns true (nonzero) if all arguments have the right type and its components have been
stored in the variables whose addresses are passed. It returns false (zero) if an invalid argument list was passed. In
the latter case it also raises an appropriate exception so the calling function can return NULL immediately (as we saw
in the example).
The Python API provides a number of functions to set various types of exceptions. The most common is PyErr_SetString(). Its arguments are an exception object and a C string. The exception
object is usually a predefined object like PyExc_ZeroDivisionError. The C string indicates the cause of the error
and is converted to a Python string object and stored as the “associated value” of the exception.
Another useful function is PyErr_SetFromErrno(), which only takes an exception argument and constructs the
associated value by inspection of the global variable errno. The most general function is PyErr_SetObject(),
which takes two object arguments, the exception and its associated value. You don’t need to Py_INCREF() the
objects passed to any of these functions.
You can test non-destructively whether an exception has been set with PyErr_Occurred(). This returns the current
exception object, or NULL if no exception has occurred. You normally don’t need to call PyErr_Occurred() to see
whether an error occurred in a function call, since you should be able to tell from the return value.
When a function f that calls another function g detects that the latter fails, f should itself return an error value
(usually NULL or -1). It should not call one of the PyErr_* functions — one has already been called by g. f’s caller
is then supposed to also return an error indication to its caller, again without calling PyErr_*, and so on — the most
detailed cause of the error was already reported by the function that first detected it. Once the error reaches the
Python interpreter’s main loop, this aborts the currently executing Python code and tries to find an exception handler
specified by the Python programmer.
(There are situations where a module can actually give a more detailed error message by calling another PyErr_*
function, and in such cases it is fine to do so. As a general rule, however, this is not necessary, and can cause
information about the cause of the error to be lost: most operations can fail for a variety of reasons.)
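A minimal sketch of this propagation convention follows; the helper names are hypothetical and Python.h is assumed to be included as in the examples above:

static PyObject *
get_payload(PyObject *mapping)
{
    /* Sets KeyError (or another exception) and returns NULL on failure. */
    return PyMapping_GetItemString(mapping, "payload");
}

static PyObject *
process(PyObject *self, PyObject *args)
{
    PyObject *mapping;

    if (!PyArg_ParseTuple(args, "O", &mapping))
        return NULL;                /* exception already set by ParseTuple */
    PyObject *payload = get_payload(mapping);
    if (payload == NULL)
        return NULL;                /* propagate; do not set a second error */
    return payload;                 /* new reference, handed to the caller */
}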
To ignore an exception set by a function call that failed, the exception condition must be cleared explicitly by calling
PyErr_Clear(). The only time C code should call PyErr_Clear() is if it doesn’t want to pass the error on to the
interpreter but wants to handle it completely by itself (possibly by trying something else, or pretending nothing went
wrong).
Every failing malloc() call must be turned into an exception — the direct caller of malloc() (or realloc())
must call PyErr_NoMemory() and return a failure indicator itself. All the object-creating functions (for example,
PyLong_FromLong()) already do this, so this note is only relevant to those who call malloc() directly.
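For instance, a helper that wraps malloc() directly might look like the following sketch; the function name is made up for illustration, and Python.h (which pulls in stdlib.h) is assumed to be included:

static char *
alloc_scratch(size_t size)
{
    char *buf = malloc(size);
    if (buf == NULL) {
        PyErr_NoMemory();   /* turns the failed malloc() into a MemoryError */
        return NULL;        /* failure indicator for our own caller */
    }
    return buf;
}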
Also note that, with the important exception of PyArg_ParseTuple() and friends, functions that return an integer
status usually return a positive value or zero for success and -1 for failure, like Unix system calls.
Finally, be careful to clean up garbage (by making Py_XDECREF() or Py_DECREF() calls for objects you have
already created) when you return an error indicator!
The choice of which exception to raise is entirely yours. There are predeclared C objects corresponding to all built-in
Python exceptions, such as PyExc_ZeroDivisionError, which you can use directly. Of course, you should choose
exceptions wisely — don’t use PyExc_TypeError to mean that a file couldn’t be opened (that should probably be
PyExc_OSError). If something’s wrong with the argument list, the PyArg_ParseTuple() function usually raises
PyExc_TypeError. If you have an argument whose value must be in a particular range or must satisfy other
conditions, PyExc_ValueError is appropriate.
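A sketch of such a range check; the function name and the limits are illustrative only:

static PyObject *
spam_set_volume(PyObject *self, PyObject *args)
{
    int level;

    if (!PyArg_ParseTuple(args, "i", &level))
        return NULL;                      /* TypeError already raised */
    if (level < 0 || level > 11) {
        PyErr_SetString(PyExc_ValueError,
                        "level must be between 0 and 11");
        return NULL;
    }
    Py_RETURN_NONE;
}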
You can also define a new exception that is unique to your module. For this, you usually declare a static object variable
at the beginning of your file:
static PyObject *SpamError;
and initialize it in your module’s initialization function (PyInit_spam()) with an exception object:
PyMODINIT_FUNC
PyInit_spam(void)
{
PyObject *m;
m = PyModule_Create(&spammodule);
if (m == NULL)
return NULL;
return m;
}
Note that the Python name for the exception object is spam.error. The PyErr_NewException() function may
create a class with the base class being Exception (unless another class is passed in instead of NULL), described in
bltin-exceptions.
Note also that the SpamError variable retains a reference to the newly created exception class; this is intentional!
Since the exception could be removed from the module by external code, an owned reference to the class is needed to
ensure that it will not be discarded, causing SpamError to become a dangling pointer. Should it become a dangling
pointer, C code which raises the exception could cause a core dump or other unintended side effects.
We discuss the use of PyMODINIT_FUNC as a function return type later in this sample.
The spam.error exception can be raised in your extension module using a call to PyErr_SetString() as shown
below:
static PyObject *
spam_system(PyObject *self, PyObject *args)
{
const char *command;
    int sts;

    if (!PyArg_ParseTuple(args, "s", &command))
        return NULL;
    sts = system(command);
    if (sts < 0) {
        PyErr_SetString(SpamError, "System command failed");
        return NULL;
    }
    return PyLong_FromLong(sts);
}
It returns NULL (the error indicator for functions returning object pointers) if an error is detected in the argument
list, relying on the exception set by PyArg_ParseTuple(). Otherwise the string value of the argument has been
copied to the local variable command. This is a pointer assignment and you are not supposed to modify the string to
which it points (so in Standard C, the variable command should properly be declared as const char *command).
The next statement is a call to the Unix function system(), passing it the string we just got from
PyArg_ParseTuple():
sts = system(command);
Our spam.system() function must return the value of sts as a Python object. This is done using the function
PyLong_FromLong().
return PyLong_FromLong(sts);
In this case, it will return an integer object. (Yes, even integers are objects on the heap in Python!)
If you have a C function that returns no useful argument (a function returning void), the corresponding Python
function must return None. You need this idiom to do so (which is implemented by the Py_RETURN_NONE macro):
Py_INCREF(Py_None);
return Py_None;
Py_None is the C name for the special Python object None. It is a genuine Python object rather than a NULL pointer,
which means “error” in most contexts, as we have seen.
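The method table and module definition that the following paragraphs refer to are not reproduced in this excerpt; for the spam module they conventionally look like this (the docstring and the NULL module-documentation slot are placeholders):

static PyMethodDef SpamMethods[] = {
    {"system", spam_system, METH_VARARGS,
     "Execute a shell command."},
    {NULL, NULL, 0, NULL}   /* Sentinel */
};

static struct PyModuleDef spammodule = {
    PyModuleDef_HEAD_INIT,
    "spam",   /* name of module */
    NULL,     /* module documentation, may be NULL */
    -1,       /* size of per-interpreter state of the module,
                 or -1 if the module keeps state in global variables */
    SpamMethods
};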
Note the third entry (METH_VARARGS). This is a flag telling the interpreter the calling convention to be used for the
C function. It should normally always be METH_VARARGS or METH_VARARGS | METH_KEYWORDS; a value of 0
means that an obsolete variant of PyArg_ParseTuple() is used.
When using only METH_VARARGS, the function should expect the Python-level parameters to be passed in as a tuple
acceptable for parsing via PyArg_ParseTuple(); more information on this function is provided below.
The METH_KEYWORDS bit may be set in the third field if keyword arguments should be passed to the function. In
this case, the C function should accept a third PyObject * parameter which will be a dictionary of keywords. Use
PyArg_ParseTupleAndKeywords() to parse the arguments to such a function.
This structure, in turn, must be passed to the interpreter in the module’s initialization function. The initialization
function must be named PyInit_name(), where name is the name of the module, and should be the only non-
static item defined in the module file:
PyMODINIT_FUNC
PyInit_spam(void)
{
return PyModule_Create(&spammodule);
}
Note that PyMODINIT_FUNC declares the function as PyObject * return type, declares any special linkage decla-
rations required by the platform, and for C++ declares the function as extern "C".
When the Python program imports module spam for the first time, PyInit_spam() is called. (See below for
comments about embedding Python.) It calls PyModule_Create(), which returns a module object, and inserts
built-in function objects into the newly created module based upon the table (an array of PyMethodDef structures)
found in the module definition. PyModule_Create() returns a pointer to the module object that it creates. It may
abort with a fatal error for certain errors, or return NULL if the module could not be initialized satisfactorily. The init
function must return the module object to its caller, so that it then gets inserted into sys.modules.
When embedding Python, the PyInit_spam() function is not called automatically unless there’s an entry in the
PyImport_Inittab table. To add the module to the initialization table, use PyImport_AppendInittab(),
optionally followed by an import of the module:
int
main(int argc, char *argv[])
{
wchar_t *program = Py_DecodeLocale(argv[0], NULL);
if (program == NULL) {
fprintf(stderr, "Fatal error: cannot decode argv[0]\n");
exit(1);
}
...
PyMem_RawFree(program);
return 0;
}
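The elided middle of that main() function is where the registration happens. A minimal self-contained sketch, assuming the PyInit_spam() function shown earlier:

#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include <stdio.h>
#include <stdlib.h>

/* Defined in the extension's source file; declared here only to keep the
   sketch self-contained. */
extern PyObject *PyInit_spam(void);

int
main(int argc, char *argv[])
{
    (void)argc;
    (void)argv;

    /* Register the module before Py_Initialize() so that "import spam"
       can find it. */
    if (PyImport_AppendInittab("spam", PyInit_spam) == -1) {
        fprintf(stderr, "Error: could not extend built-in modules table\n");
        exit(1);
    }

    Py_Initialize();

    /* Optionally import the module right away; otherwise the import
       happens when embedded Python code first imports it. */
    PyObject *pmodule = PyImport_ImportModule("spam");
    if (pmodule == NULL) {
        PyErr_Print();
        fprintf(stderr, "Error: could not import module 'spam'\n");
    }
    Py_XDECREF(pmodule);

    if (Py_FinalizeEx() < 0) {
        exit(120);
    }
    return 0;
}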
Note:
Removing entries from sys.modules or importing compiled modules into multiple interpreters within a pro-
cess (or following a fork() without an intervening exec()) can create problems for some extension modules.
Extension module authors should exercise caution when initializing internal data structures.
A more substantial example module is included in the Python source distribution as Modules/xxmodule.c. This file may be used as a template or simply read as an example.

Note:
Unlike our spam example, xxmodule uses multi-phase initialization (new in Python 3.5), where a PyModuleDef
structure is returned from PyInit_spam, and creation of the module is left to the import machinery. For details
on multi-phase initialization, see PEP 489.
To build the module as part of the interpreter, place spammodule.c in the Modules/ directory of an unpacked source distribution and add a line to the file Modules/Setup.local describing your file:

spam spammodule.o
and rebuild the interpreter by running make in the toplevel directory. You can also run make in the Modules/
subdirectory, but then you must first rebuild Makefile there by running ‘make Makefile’. (This is necessary each
time you change the Setup file.)
If your module requires additional libraries to link with, these can be listed on the line in the configuration file as well, for instance:

spam spammodule.o -lX11

Calling Python Functions from C

So far we have concentrated on making C functions callable from Python. The reverse is also useful: calling Python functions from C. This is especially the case for libraries that support so-called callback functions. The usual approach is for the module to export a registration function; the Python program passes it a callable object, and the registration function stores a pointer to that object (being careful to Py_INCREF() it!) in a global variable. For example, the following function might be part of a module definition:
static PyObject *my_callback = NULL;

static PyObject *
my_set_callback(PyObject *dummy, PyObject *args)
{
    PyObject *result = NULL;
    PyObject *temp;

    if (PyArg_ParseTuple(args, "O:set_callback", &temp)) {
        if (!PyCallable_Check(temp)) {
            PyErr_SetString(PyExc_TypeError, "parameter must be callable");
            return NULL;
        }
        Py_XINCREF(temp);         /* Add a reference to new callback */
        Py_XDECREF(my_callback);  /* Dispose of previous callback */
        my_callback = temp;       /* Remember new callback */
        /* Boilerplate to return "None" */
        Py_INCREF(Py_None);
        result = Py_None;
    }
    return result;
}
This function must be registered with the interpreter using the METH_VARARGS flag; this is described in section
The Module’s Method Table and Initialization Function. The PyArg_ParseTuple() function and its arguments are
documented in section Extracting Parameters in Extension Functions.
The macros Py_XINCREF() and Py_XDECREF() increment/decrement the reference count of an object and are safe
in the presence of NULL pointers (but note that temp will not be NULL in this context). More info on them in section
Reference Counts.
Later, when it is time to call the function, you call the C function PyObject_CallObject(). This function has
two arguments, both pointers to arbitrary Python objects: the Python function, and the argument list. The argument
list must always be a tuple object, whose length is the number of arguments. To call the Python function with no
arguments, pass in NULL, or an empty tuple; to call it with one argument, pass a singleton tuple. Py_BuildValue()
returns a tuple when its format string consists of zero or more format codes between parentheses. For example:
int arg;
PyObject *arglist;
PyObject *result;
...
arg = 123;
...
/* Time to call the callback */
arglist = Py_BuildValue("(i)", arg);
result = PyObject_CallObject(my_callback, arglist);
Py_DECREF(arglist);
PyObject_CallObject() returns a Python object pointer: this is the return value of the Python func-
tion. PyObject_CallObject() is “reference-count-neutral” with respect to its arguments. In the exam-
ple a new tuple was created to serve as the argument list, which is Py_DECREF()-ed immediately after the
PyObject_CallObject() call.
The return value of PyObject_CallObject() is “new”: either it is a brand new object, or it is an existing object
whose reference count has been incremented. So, unless you want to save it in a global variable, you should somehow
Py_DECREF() the result, even (especially!) if you are not interested in its value.
Before you do this, however, it is important to check that the return value isn’t NULL. If it is, the Python function
terminated by raising an exception. If the C code that called PyObject_CallObject() is called from Python, it
should now return an error indication to its Python caller, so the interpreter can print a stack trace, or the calling
Python code can handle the exception. If this is not possible or desirable, the exception should be cleared by calling
PyErr_Clear(). For example:
if (result == NULL)
return NULL; /* Pass error back */
...use result...
Py_DECREF(result);
Depending on the desired interface to the Python callback function, you may also have to provide an argument list to
PyObject_CallObject(). In some cases the argument list is also provided by the Python program, through the
same interface that specified the callback function. It can then be saved and used in the same manner as the function
object. In other cases, you may have to construct a new tuple to pass as the argument list. The simplest way to do this
is to call Py_BuildValue(). For example, if you want to pass an integral event code, you might use the following
code:
PyObject *arglist;
...
arglist = Py_BuildValue("(l)", eventcode);
result = PyObject_CallObject(my_callback, arglist);
Py_DECREF(arglist);
if (result == NULL)
return NULL; /* Pass error back */
/* Here maybe use the result */
Py_DECREF(result);
Note the placement of Py_DECREF(arglist) immediately after the call, before the error check! Also note that
strictly speaking this code is not complete: Py_BuildValue() may run out of memory, and this should be checked.
You may also call a function with keyword arguments by using PyObject_Call(), which supports arguments and
keyword arguments. As in the above example, we use Py_BuildValue() to construct the dictionary.
PyObject *dict, *empty;
...
dict = Py_BuildValue("{s:i}", "name", val);
empty = PyTuple_New(0);   /* PyObject_Call() requires a real (possibly empty) args tuple */
result = PyObject_Call(my_callback, empty, dict);
Py_DECREF(empty);
Py_DECREF(dict);
if (result == NULL)
return NULL; /* Pass error back */
/* Here maybe use the result */
Py_DECREF(result);
Extracting Parameters in Extension Functions

The PyArg_ParseTuple() function is declared as follows:

int PyArg_ParseTuple(PyObject *arg, const char *format, ...);

The arg argument must be a tuple object containing an argument list passed from Python to a C function. The format
argument must be a format string, whose syntax is explained in arg-parsing in the Python/C API Reference Manual.
The remaining arguments must be addresses of variables whose type is determined by the format string.
Note that while PyArg_ParseTuple() checks that the Python arguments have the required types, it cannot check
the validity of the addresses of C variables passed to the call: if you make mistakes there, your code will probably
crash or at least overwrite random bits in memory. So be careful!
Note that any Python object references which are provided to the caller are borrowed references; do not decrement
their reference count!
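For example, the "O" format unit hands back a borrowed reference that must not be released. The function below is illustrative, not part of the original example set:

static PyObject *
example_first_item(PyObject *self, PyObject *args)
{
    PyObject *seq;   /* will be a borrowed reference */

    if (!PyArg_ParseTuple(args, "O", &seq))
        return NULL;
    /* Use seq, but do not Py_DECREF() it: we never owned a reference. */
    return PySequence_GetItem(seq, 0);   /* returns a new reference */
}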
Some example calls:
int ok;
int i, j;
long k, l;
const char *s;
{
const char *file;
const char *mode = "r";
int bufsize = 0;
ok = PyArg_ParseTuple(args, "s|si", &file, &mode, &bufsize);
/* A string, and optionally another string and an integer */
/* Possible Python calls:
f('spam')
f('spam', 'w')
f('spam', 'wb', 100000) */
}
{
int left, top, right, bottom, h, v;
ok = PyArg_ParseTuple(args, "((ii)(ii))(ii)",
&left, &top, &right, &bottom, &h, &v);
/* A rectangle and a point */
/* Possible Python call:
f(((0, 0), (400, 300)), (10, 10)) */
}
{
Py_complex c;
ok = PyArg_ParseTuple(args, "D:myfunction", &c);
/* a complex, also providing a function name for errors */
/* Possible Python call: myfunction(1+2j) */
}
Keyword Parameters for Extension Functions

The PyArg_ParseTupleAndKeywords() function is declared as follows:

int PyArg_ParseTupleAndKeywords(PyObject *arg, PyObject *kwdict,
                                const char *format, char *kwlist[], ...);

The arg and format parameters are identical to those of the PyArg_ParseTuple() function. The kwdict parameter
is the dictionary of keywords received as the third parameter from the Python runtime. The kwlist parameter is a
NULL-terminated list of strings which identify the parameters; the names are matched with the type information from
format from left to right. On success, PyArg_ParseTupleAndKeywords() returns true, otherwise it returns false
and raises an appropriate exception.
Note:
Nested tuples cannot be parsed when using keyword arguments! Keyword parameters passed in which are not
present in the kwlist will cause TypeError to be raised.
Here is an example module which uses keywords, based on an example by Geoff Philbrick (philbrick@hks.com):
static PyObject *
keywdarg_parrot(PyObject *self, PyObject *args, PyObject *keywds)
{
int voltage;
const char *state = "a stiff";
const char *action = "voom";
const char *type = "Norwegian Blue";
    static char *kwlist[] = {"voltage", "state", "action", "type", NULL};

    if (!PyArg_ParseTupleAndKeywords(args, keywds, "i|sss", kwlist,
                                     &voltage, &state, &action, &type))
        return NULL;

    printf("-- This parrot wouldn't %s if you put %i Volts through it.\n",
           action, voltage);
    printf("-- Lovely plumage, the %s -- It's %s!\n", type, state);

    Py_RETURN_NONE;
}
PyMODINIT_FUNC
PyInit_keywdarg(void)
{
return PyModule_Create(&keywdargmodule);
}
Building Arbitrary Values

The counterpart to PyArg_ParseTuple() is Py_BuildValue(). It is declared as follows:

PyObject *Py_BuildValue(const char *format, ...);

It recognizes a set of format units similar to the ones recognized by PyArg_ParseTuple(), but the arguments
(which are input to the function, not output) must not be pointers, just values. It returns a new Python object, suitable
for returning from a C function called from Python.
One difference with PyArg_ParseTuple(): while the latter requires its first argument to be a tuple (since Python
argument lists are always represented as tuples internally), Py_BuildValue() does not always build a tuple. It
builds a tuple only if its format string contains two or more format units. If the format string is empty, it returns
None; if it contains exactly one format unit, it returns whatever object is described by that format unit. To force it to
return a tuple of size 0 or one, parenthesize the format string.
Examples (to the left the call, to the right the resulting Python value):
Py_BuildValue("") None
Py_BuildValue("i", 123) 123
Py_BuildValue("iii", 123, 456, 789) (123, 456, 789)
Py_BuildValue("s", "hello") 'hello'
Py_BuildValue("y", "hello") b'hello'
Py_BuildValue("ss", "hello", "world") ('hello', 'world')
Py_BuildValue("s#", "hello", 4) 'hell'
Py_BuildValue("y#", "hello", 4) b'hell'
Py_BuildValue("()") ()
Py_BuildValue("(i)", 123) (123,)
Py_BuildValue("(ii)", 123, 456) (123, 456)
Py_BuildValue("(i,i)", 123, 456) (123, 456)
Py_BuildValue("[i,i]", 123, 456) [123, 456]
Py_BuildValue("{s:i,s:i}",
"abc", 123, "def", 456) {'abc': 123, 'def': 456}
Py_BuildValue("((ii)(ii)) (ii)",
1, 2, 3, 4, 5, 6) (((1, 2), (3, 4)), (5, 6))
Reference Counts

In Python, reference counting works as follows: every object contains a counter, which is incremented when a reference to the object is stored somewhere and decremented when a reference to it is deleted. When the counter reaches zero, the last reference to the object has been deleted and the object is freed.
An alternative strategy is called automatic garbage collection. (Sometimes, reference counting is also referred to as
a garbage collection strategy, hence my use of “automatic” to distinguish the two.) The big advantage of automatic
garbage collection is that the user doesn’t need to call free() explicitly. (Another claimed advantage is an improve-
ment in speed or memory usage — this is no hard fact however.) The disadvantage is that for C, there is no truly
portable automatic garbage collector, while reference counting can be implemented portably (as long as the functions
malloc() and free() are available — which the C Standard guarantees). Maybe some day a sufficiently portable
automatic garbage collector will be available for C. Until then, we’ll have to live with reference counts.
While Python uses the traditional reference counting implementation, it also offers a cycle detector that works to
detect reference cycles. This allows applications to not worry about creating direct or indirect circular references;
these are the weakness of garbage collection implemented using only reference counting. Reference cycles consist
of objects which contain (possibly indirect) references to themselves, so that each object in the cycle has a reference
count which is non-zero. Typical reference counting implementations are not able to reclaim the memory belonging
to any objects in a reference cycle, or referenced from the objects in the cycle, even though there are no further
references to the cycle itself.
The cycle detector is able to detect garbage cycles and can reclaim them. The gc module exposes a way to run
the detector (the collect() function), as well as configuration interfaces and the ability to disable the detector at
runtime.
Ownership Rules
Whenever an object reference is passed into or out of a function, it is part of the function’s interface specification
whether ownership is transferred with the reference or not.
Most functions that return a reference to an object pass on ownership with the reference. In particular, all functions
whose function it is to create a new object, such as PyLong_FromLong() and Py_BuildValue(), pass ownership
to the receiver. Even if the object is not actually new, you still receive ownership of a new reference to that object.
For instance, PyLong_FromLong() maintains a cache of popular values and can return a reference to a cached item.
[2] The metaphor of “borrowing” a reference is not completely correct: the owner still has a copy of the reference.
[3] Checking that the reference count is at least 1 does not work — the reference count itself could be in freed memory and may thus be reused for another object!
Many functions that extract objects from other objects also transfer ownership with the reference, for instance
PyObject_GetAttrString(). The picture is less clear, here, however, since a few common routines are ex-
ceptions: PyTuple_GetItem(), PyList_GetItem(), PyDict_GetItem(), and PyDict_GetItemString()
all return references that you borrow from the tuple, list or dictionary.
The function PyImport_AddModule() also returns a borrowed reference, even though it may actually create the
object it returns: this is possible because an owned reference to the object is stored in sys.modules.
When you pass an object reference into another function, in general, the function borrows the reference from you —
if it needs to store it, it will use Py_INCREF() to become an independent owner. There are exactly two important
exceptions to this rule: PyTuple_SetItem() and PyList_SetItem(). These functions take over ownership of
the item passed to them — even if they fail! (Note that PyDict_SetItem() and friends don’t take over ownership
— they are “normal.”)
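A sketch of the stealing behaviour; make_pair is a hypothetical helper, and Python.h is assumed to be included:

static PyObject *
make_pair(long a, long b)
{
    PyObject *t = PyTuple_New(2);
    if (t == NULL)
        return NULL;

    PyObject *first = PyLong_FromLong(a);
    if (first == NULL) {
        Py_DECREF(t);
        return NULL;
    }
    PyTuple_SetItem(t, 0, first);    /* steals our reference to first */

    PyObject *second = PyLong_FromLong(b);
    if (second == NULL) {
        Py_DECREF(t);                /* releases first via the tuple */
        return NULL;
    }
    PyTuple_SetItem(t, 1, second);   /* steals our reference to second */

    return t;
}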
When a C function is called from Python, it borrows references to its arguments from the caller. The caller owns a
reference to the object, so the borrowed reference’s lifetime is guaranteed until the function returns. Only when such a
borrowed reference must be stored or passed on, it must be turned into an owned reference by calling Py_INCREF().
The object reference returned from a C function that is called from Python must be an owned reference — ownership
is transferred from the function to its caller.
Thin Ice
There are a few situations where seemingly harmless use of a borrowed reference can lead to problems. These all
have to do with implicit invocations of the interpreter, which can cause the owner of a reference to dispose of it.
The first and most important case to know about is using Py_DECREF() on an unrelated object while borrowing a
reference to a list item. For instance:
void
bug(PyObject *list)
{
PyObject *item = PyList_GetItem(list, 0);
PyList_SetItem(list, 1, PyLong_FromLong(0L));
PyObject_Print(item, stdout, 0); /* BUG! */
}
This function first borrows a reference to list[0], then replaces list[1] with the value 0, and finally prints the
borrowed reference. Looks harmless, right? But it’s not!
Let’s follow the control flow into PyList_SetItem(). The list owns references to all its items, so when item 1
is replaced, it has to dispose of the original item 1. Now let’s suppose the original item 1 was an instance of a
user-defined class, and let’s further suppose that the class defined a __del__() method. If this class instance has a
reference count of 1, disposing of it will call its __del__() method.
Since it is written in Python, the __del__() method can execute arbitrary Python code. Could it perhaps do
something to invalidate the reference to item in bug()? You bet! Assuming that the list passed into bug() is
accessible to the __del__() method, it could execute a statement to the effect of del list[0], and assuming this
was the last reference to that object, it would free the memory associated with it, thereby invalidating item.
The solution, once you know the source of the problem, is easy: temporarily increment the reference count. The
correct version of the function reads:
void
no_bug(PyObject *list)
{
PyObject *item = PyList_GetItem(list, 0);
Py_INCREF(item);
PyList_SetItem(list, 1, PyLong_FromLong(0L));
PyObject_Print(item, stdout, 0);
    Py_DECREF(item);
}
This is a true story. An older version of Python contained variants of this bug and someone spent a considerable
amount of time in a C debugger to figure out why his __del__() methods would fail…
The second case of problems with a borrowed reference is a variant involving threads. Normally, multiple threads in
the Python interpreter can’t get in each other’s way, because there is a global lock protecting Python’s entire object
space. However, it is possible to temporarily release this lock using the macro Py_BEGIN_ALLOW_THREADS, and to
re-acquire it using Py_END_ALLOW_THREADS. This is common around blocking I/O calls, to let other threads use
the processor while waiting for the I/O to complete. Obviously, the following function has the same problem as the
previous one:
void
bug(PyObject *list)
{
PyObject *item = PyList_GetItem(list, 0);
Py_BEGIN_ALLOW_THREADS
...some blocking I/O call...
Py_END_ALLOW_THREADS
PyObject_Print(item, stdout, 0); /* BUG! */
}
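The cure is the same as before: own a reference for the duration of the call that may release the lock. A sketch follows; the function name is ours, and the blocking call is left as a comment:

void
no_bug_with_threads(PyObject *list)
{
    PyObject *item = PyList_GetItem(list, 0);   /* borrowed reference */
    if (item == NULL)
        return;
    Py_INCREF(item);            /* own it before releasing the lock */
    Py_BEGIN_ALLOW_THREADS
    /* ...some blocking I/O call... */
    Py_END_ALLOW_THREADS
    PyObject_Print(item, stdout, 0);
    Py_DECREF(item);
}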
NULL Pointers
In general, functions that take object references as arguments do not expect you to pass them NULL pointers, and will
dump core (or cause later core dumps) if you do so. Functions that return object references generally return NULL
only to indicate that an exception occurred. The reason for not testing for NULL arguments is that functions often
pass the objects they receive on to other function — if each function were to test for NULL, there would be a lot of
redundant tests and the code would run more slowly.
It is better to test for NULL only at the “source:” when a pointer that may be NULL is received, for example, from
malloc() or from a function that may raise an exception.
The macros Py_INCREF() and Py_DECREF() do not check for NULL pointers — however, their variants
Py_XINCREF() and Py_XDECREF() do.
The macros for checking for a particular object type (the Py<Type>_Check() family, such as PyList_Check()) don't check for NULL pointers — again,
there is much code that calls several of these in a row to test an object against various different expected types, and
this would generate redundant tests. There are no variants with NULL checking.
The C function calling mechanism guarantees that the argument list passed to C functions (args in the examples) is
never NULL — in fact it guarantees that it is always a tuple.
It is a severe error to ever let a NULL pointer “escape” to the Python user.
Providing a C API for an Extension Module

Capsules can be used to export the C API of an extension module to other extension modules. To make the C API easy to find, the Capsule should be named following this convention:

modulename.attributename
The convenience function PyCapsule_Import() makes it easy to load a C API provided via a Capsule, but only
if the Capsule’s name matches this convention. This behavior gives C API users a high degree of certainty that the
Capsule they load contains the correct C API.
The following example demonstrates an approach that puts most of the burden on the writer of the exporting module,
which is appropriate for commonly used library modules. It stores all C API pointers (just one in the example!) in an
array of void pointers which becomes the value of a Capsule. The header file corresponding to the module provides
a macro that takes care of importing the module and retrieving its C API pointers; client modules only have to call
this macro before accessing the C API.
The exporting module is a modification of the spam module from section A Simple Example. The function spam.
system() does not call the C library function system() directly, but a function PySpam_System(), which would
of course do something more complicated in reality (such as adding “spam” to every command). This function
PySpam_System() is also exported to other extension modules.
The function PySpam_System() is a plain C function, declared static like everything else:
static int
PySpam_System(const char *command)
{
    return system(command);
}

The function spam_system() is modified in a trivial way:
static PyObject *
spam_system(PyObject *self, PyObject *args)
{
const char *command;
    int sts;

    if (!PyArg_ParseTuple(args, "s", &command))
        return NULL;
    sts = PySpam_System(command);
    return PyLong_FromLong(sts);
}

At the beginning of the module, right after the #include <Python.h> line, two more lines must be added:
#include <Python.h>
#define SPAM_MODULE
#include "spammodule.h"
The #define is used to tell the header file that it is being included in the exporting module, not a client module.
Finally, the module’s initialization function must take care of initializing the C API pointer array:
PyMODINIT_FUNC
PyInit_spam(void)
{
PyObject *m;
static void *PySpam_API[PySpam_API_pointers];
PyObject *c_api_object;
m = PyModule_Create(&spammodule);
if (m == NULL)
return NULL;
    /* Initialize the C API pointer array */
    PySpam_API[PySpam_System_NUM] = (void *)PySpam_System;

    /* Create a Capsule containing the address of the C API pointer array */
    c_api_object = PyCapsule_New((void *)PySpam_API, "spam._C_API", NULL);

    if (PyModule_AddObject(m, "_C_API", c_api_object) < 0) {
        Py_XDECREF(c_api_object);
        Py_DECREF(m);
        return NULL;
    }

    return m;
}
Note that PySpam_API is declared static; otherwise the pointer array would disappear when PyInit_spam()
terminates!
The bulk of the work is in the header file spammodule.h, which looks like this:
#ifndef Py_SPAMMODULE_H
#define Py_SPAMMODULE_H
#ifdef __cplusplus
extern "C" {
#endif
/* C API functions */
#define PySpam_System_NUM 0
#define PySpam_System_RETURN int
#define PySpam_System_PROTO (const char *command)
#ifdef SPAM_MODULE
/* This section is used when compiling spammodule.c */
static PySpam_System_RETURN PySpam_System PySpam_System_PROTO;
#else
/* This section is used in modules that use spammodule's API */
static void **PySpam_API;
#define PySpam_System \
(*(PySpam_System_RETURN (*)PySpam_System_PROTO) PySpam_API[PySpam_System_NUM])
/* Return -1 on error, 0 on success.
 * PyCapsule_Import will set an exception if there's an error.
 */
static int
import_spam(void)
{
    PySpam_API = (void **)PyCapsule_Import("spam._C_API", 0);
    return (PySpam_API != NULL) ? 0 : -1;
}

#endif
#ifdef __cplusplus
}
#endif
#endif /* !defined(Py_SPAMMODULE_H) */
All that a client module must do in order to have access to the function PySpam_System() is to call the function
(or rather macro) import_spam() in its initialization function:
PyMODINIT_FUNC
PyInit_client(void)
{
    PyObject *m;

    m = PyModule_Create(&clientmodule);
    if (m == NULL)
        return NULL;
    if (import_spam() < 0)
        return NULL;
    /* additional initialization can happen here */
    return m;
}
The main disadvantage of this approach is that the file spammodule.h is rather complicated. However, the basic
structure is the same for each function that is exported, so it has to be learned only once.
Finally it should be mentioned that Capsules offer additional functionality, which is especially useful for memory
allocation and deallocation of the pointer stored in a Capsule. The details are described in the Python/C API Reference
Manual in the section capsules and in the implementation of Capsules (files Include/pycapsule.h and Objects/
pycapsule.c in the Python source code distribution).
Note:
What we’re showing here is the traditional way of defining static extension types. It should be adequate for most
uses. The C API also allows defining heap-allocated extension types using the PyType_FromSpec() function,
which isn’t covered in this tutorial.
#define PY_SSIZE_T_CLEAN
#include <Python.h>
typedef struct {
PyObject_HEAD
/* Type-specific fields go here. */
} CustomObject;
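The listing above elides the type object and module definition. Reassembled from the fields discussed below in this chapter, they look roughly like this:

static PyTypeObject CustomType = {
    .ob_base = PyVarObject_HEAD_INIT(NULL, 0)
    .tp_name = "custom.Custom",
    .tp_doc = PyDoc_STR("Custom objects"),
    .tp_basicsize = sizeof(CustomObject),
    .tp_itemsize = 0,
    .tp_flags = Py_TPFLAGS_DEFAULT,
    .tp_new = PyType_GenericNew,
};

static PyModuleDef custommodule = {
    .m_base = PyModuleDef_HEAD_INIT,
    .m_name = "custom",
    .m_doc = "Example module that creates an extension type.",
    .m_size = -1,
};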
PyMODINIT_FUNC
PyInit_custom(void)
{
PyObject *m;
if (PyType_Ready(&CustomType) < 0)
return NULL;
m = PyModule_Create(&custommodule);
if (m == NULL)
return NULL;
Py_INCREF(&CustomType);
if (PyModule_AddObject(m, "Custom", (PyObject *) &CustomType) < 0) {
Py_DECREF(&CustomType);
Py_DECREF(m);
return NULL;
}
return m;
}
Now that’s quite a bit to take in at once, but hopefully bits will seem familiar from the previous chapter. This file
defines three things:
1. What a Custom object contains: this is the CustomObject struct, which is allocated once for each Custom
instance.
2. How the Custom type behaves: this is the CustomType struct, which defines a set of flags and function
pointers that the interpreter inspects when specific operations are requested.
3. How to initialize the custom module: this is the PyInit_custom function and the associated custommodule
struct.
The first bit is:
typedef struct {
PyObject_HEAD
} CustomObject;
This is what a Custom object will contain. PyObject_HEAD is mandatory at the start of each object struct and
defines a field called ob_base of type PyObject, containing a pointer to a type object and a reference count (these
can be accessed using the macros Py_TYPE and Py_REFCNT respectively). The reason for the macro is to abstract
away the layout and to enable additional fields in debug builds.
Note: There is no semicolon above after the PyObject_HEAD macro. Be wary of adding one by accident: some compilers will complain.
Of course, objects generally store additional data besides the standard PyObject_HEAD boilerplate; for example,
here is the definition for standard Python floats:
typedef struct {
PyObject_HEAD
double ob_fval;
} PyFloatObject;
Note:
We recommend using C99-style designated initializers as above, to avoid listing all the PyTypeObject fields
that you don’t care about and also to avoid caring about the fields’ declaration order.
The actual definition of PyTypeObject in object.h has many more fields than the definition above. The remaining
fields will be filled with zeros by the C compiler, and it’s common practice to not specify them explicitly unless you
need them.
We’re going to pick it apart, one field at a time:
.ob_base = PyVarObject_HEAD_INIT(NULL, 0)
This line is mandatory boilerplate to initialize the ob_base field mentioned above.
.tp_name = "custom.Custom",
The name of our type. This will appear in the default textual representation of our objects and in some error messages, for example:

>>> "" + custom.Custom()
Traceback (most recent call last):
  ...
TypeError: can only concatenate str (not "custom.Custom") to str
Note that the name is a dotted name that includes both the module name and the name of the type within the module.
The module in this case is custom and the type is Custom, so we set the type name to custom.Custom. Using the
real dotted import path is important to make your type compatible with the pydoc and pickle modules.
.tp_basicsize = sizeof(CustomObject),
.tp_itemsize = 0,
This is so that Python knows how much memory to allocate when creating new Custom instances. tp_itemsize
is only used for variable-sized objects and should otherwise be zero.
Note:
If you want your type to be subclassable from Python, and your type has the same tp_basicsize as its base
type, you may have problems with multiple inheritance. A Python subclass of your type will have to list your
type first in its __bases__, or else it will not be able to call your type’s __new__() method without getting an
error. You can avoid this problem by ensuring that your type has a larger value for tp_basicsize than its base
type does. Most of the time, this will be true anyway, because either your base type will be object, or else you
will be adding data members to your base type, and therefore increasing its size.
.tp_flags = Py_TPFLAGS_DEFAULT,
All types should include this constant in their flags. It enables all of the members defined until at least Python 3.3. If
you need further members, you will need to OR the corresponding flags.
We provide a doc string for the type in tp_doc:

.tp_doc = PyDoc_STR("Custom objects"),
To enable object creation, we have to provide a tp_new handler. This is the equivalent of the Python method
__new__(), but has to be specified explicitly. In this case, we can just use the default implementation provided by
the API function PyType_GenericNew().
.tp_new = PyType_GenericNew,
Everything else in the file should be familiar, except for some code in PyInit_custom():
if (PyType_Ready(&CustomType) < 0)
    return NULL;
This initializes the Custom type, filling in a number of members to the appropriate default values, including ob_type
that we initially set to NULL.
Py_INCREF(&CustomType);
if (PyModule_AddObject(m, "Custom", (PyObject *) &CustomType) < 0) {
Py_DECREF(&CustomType);
Py_DECREF(m);
return NULL;
}
This adds the type to the module dictionary. This allows us to create Custom instances by calling the Custom class:

>>> import custom
>>> mycustom = custom.Custom()
That’s it! All that remains is to build it; put the above code in a file called custom.c,
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
[project]
name = "custom"
version = "1"
in a file called pyproject.toml, and

from setuptools import Extension, setup
setup(ext_modules=[Extension("custom", ["custom.c"])])

in a file called setup.py; then typing

$ python -m pip install .

in a shell should produce a file custom.so in a subdirectory and install it; now fire up Python — you should be able
to import custom and play around with Custom objects.
That wasn’t so hard, was it?
Of course, the current Custom type is pretty uninteresting. It has no data and doesn’t do anything. It can’t even be
subclassed.
#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include <stddef.h> /* for offsetof() */
typedef struct {
PyObject_HEAD
PyObject *first; /* first name */
PyObject *last; /* last name */
int number;
} CustomObject;
static void
Custom_dealloc(CustomObject *self)
{
Py_XDECREF(self->first);
Py_XDECREF(self->last);
Py_TYPE(self)->tp_free((PyObject *) self);
}
static PyObject *
Custom_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
{
CustomObject *self;
self = (CustomObject *) type->tp_alloc(type, 0);
if (self != NULL) {
self->first = PyUnicode_FromString("");
if (self->first == NULL) {
Py_DECREF(self);
return NULL;
}
self->last = PyUnicode_FromString("");
if (self->last == NULL) {
Py_DECREF(self);
return NULL;
}
self->number = 0;
}
return (PyObject *) self;
}
static int
Custom_init(CustomObject *self, PyObject *args, PyObject *kwds)
{
static char *kwlist[] = {"first", "last", "number", NULL};
    PyObject *first = NULL, *last = NULL;

    if (!PyArg_ParseTupleAndKeywords(args, kwds, "|OOi", kwlist,
                                     &first, &last,
                                     &self->number))
        return -1;
if (first) {
Py_XSETREF(self->first, Py_NewRef(first));
}
if (last) {
Py_XSETREF(self->last, Py_NewRef(last));
}
return 0;
}
static PyObject *
Custom_name(CustomObject *self, PyObject *Py_UNUSED(ignored))
{
if (self->first == NULL) {
PyErr_SetString(PyExc_AttributeError, "first");
return NULL;
}
if (self->last == NULL) {
PyErr_SetString(PyExc_AttributeError, "last");
return NULL;
}
return PyUnicode_FromFormat("%S %S", self->first, self->last);
}
PyMODINIT_FUNC
PyInit_custom2(void)
{
PyObject *m;
if (PyType_Ready(&CustomType) < 0)
return NULL;
m = PyModule_Create(&custommodule);
if (m == NULL)
return NULL;
return m;
}
Because we now have data to manage, we have to be more careful about object allocation and deallocation. At a
minimum, we need a deallocation method:
static void
Custom_dealloc(CustomObject *self)
{
Py_XDECREF(self->first);
    Py_XDECREF(self->last);
    Py_TYPE(self)->tp_free((PyObject *) self);
}
This method first clears the reference counts of the two Python attributes. Py_XDECREF() correctly handles the case
where its argument is NULL (which might happen here if tp_new failed midway). It then calls the tp_free member
of the object’s type (computed by Py_TYPE(self)) to free the object’s memory. Note that the object’s type might
not be CustomType, because the object may be an instance of a subclass.
Note:
The explicit cast to destructor above is needed because we defined Custom_dealloc to take a
CustomObject * argument, but the tp_dealloc function pointer expects to receive a PyObject * argu-
ment. Otherwise, the compiler will emit a warning. This is object-oriented polymorphism, in C!
We want to make sure that the first and last names are initialized to empty strings, so we provide a tp_new imple-
mentation:
static PyObject *
Custom_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
{
CustomObject *self;
self = (CustomObject *) type->tp_alloc(type, 0);
if (self != NULL) {
self->first = PyUnicode_FromString("");
if (self->first == NULL) {
Py_DECREF(self);
return NULL;
}
self->last = PyUnicode_FromString("");
if (self->last == NULL) {
Py_DECREF(self);
return NULL;
}
self->number = 0;
}
return (PyObject *) self;
}
.tp_new = Custom_new,
The tp_new handler is responsible for creating (as opposed to initializing) objects of the type. It is exposed in Python
as the __new__() method. It is not required to define a tp_new member, and indeed many extension types will
simply reuse PyType_GenericNew() as done in the first version of the Custom type above. In this case, we use
the tp_new handler to initialize the first and last attributes to non-NULL default values.
tp_new is passed the type being instantiated (not necessarily CustomType, if a subclass is instantiated) and any
arguments passed when the type was called, and is expected to return the instance created. tp_new handlers always
accept positional and keyword arguments, but they often ignore the arguments, leaving the argument handling to
initializer (a.k.a. tp_init in C or __init__ in Python) methods.
Note:
Since memory allocation may fail, we must check the tp_alloc result against NULL before proceeding.
Note:
We didn’t fill the tp_alloc slot ourselves. Rather PyType_Ready() fills it for us by inheriting it from our base
class, which is object by default. Most types use the default allocation strategy.
Note:
If you are creating a co-operative tp_new (one that calls a base type’s tp_new or __new__()), you must not
try to determine what method to call using method resolution order at runtime. Always statically determine what
type you are going to call, and call its tp_new directly, or via type->tp_base->tp_new. If you do not do
this, Python subclasses of your type that also inherit from other Python-defined classes may not work correctly.
(Specifically, you may not be able to create instances of such subclasses without getting a TypeError.)
We also define an initialization function which accepts arguments to provide initial values for our instance:
static int
Custom_init(CustomObject *self, PyObject *args, PyObject *kwds)
{
static char *kwlist[] = {"first", "last", "number", NULL};
    PyObject *first = NULL, *last = NULL, *tmp;

    if (!PyArg_ParseTupleAndKeywords(args, kwds, "|OOi", kwlist,
                                     &first, &last,
                                     &self->number))
        return -1;
if (first) {
tmp = self->first;
Py_INCREF(first);
self->first = first;
Py_XDECREF(tmp);
}
if (last) {
tmp = self->last;
Py_INCREF(last);
self->last = last;
Py_XDECREF(tmp);
}
return 0;
}
The tp_init slot is exposed in Python as the __init__() method. It is used to initialize an object after it’s created.
Initializers always accept positional and keyword arguments, and they should return either 0 on success or -1 on error.
Unlike the tp_new handler, there is no guarantee that tp_init is called at all (for example, the pickle module by
default doesn’t call __init__() on unpickled instances). It can also be called multiple times. Anyone can call the
__init__() method on our objects. For this reason, we have to be extra careful when assigning the new attribute
values. We might be tempted, for example to assign the first member like this:
if (first) {
Py_XDECREF(self->first);
Py_INCREF(first);
self->first = first;
}
But this would be risky. Our type doesn’t restrict the type of the first member, so it could be any kind of object.
It could have a destructor that causes code to be executed that tries to access the first member; or that destructor
could release the Global interpreter Lock and let arbitrary code run in other threads that accesses and modifies our
object.
To be paranoid and protect ourselves against this possibility, we almost always reassign members before decrementing
their reference counts. When don’t we have to do this?
• when we absolutely know that the reference count is greater than 1;
• when we know that deallocation of the object [1] will neither release the GIL nor cause any calls back into our
type’s code;
• when decrementing a reference count in a tp_dealloc handler on a type which doesn’t support cyclic garbage
collection [2].
We want to expose our instance variables as attributes. There are a number of ways to do that. The simplest way is
to define member definitions:
.tp_members = Custom_members,
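The Custom_members array itself is not reproduced in this excerpt; a sketch of what it conventionally looks like, using the Py_T_* member type constants available in Python 3.12:

static PyMemberDef Custom_members[] = {
    {"first", Py_T_OBJECT_EX, offsetof(CustomObject, first), 0,
     "first name"},
    {"last", Py_T_OBJECT_EX, offsetof(CustomObject, last), 0,
     "last name"},
    {"number", Py_T_INT, offsetof(CustomObject, number), 0,
     "custom number"},
    {NULL}  /* Sentinel */
};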
Each member definition has a member name, type, offset, access flags and documentation string. See the Generic
Attribute Management section below for details.
A disadvantage of this approach is that it doesn’t provide a way to restrict the types of objects that can be assigned
to the Python attributes. We expect the first and last names to be strings, but any Python objects can be assigned.
Further, the attributes can be deleted, setting the C pointers to NULL. Even though we can make sure the members
are initialized to non-NULL values, the members can be set to NULL if the attributes are deleted.
We define a single method, Custom.name(), that outputs the objects name as the concatenation of the first and last
names.
static PyObject *
Custom_name(CustomObject *self, PyObject *Py_UNUSED(ignored))
{
    if (self->first == NULL) {
        PyErr_SetString(PyExc_AttributeError, "first");
        return NULL;
    }
    if (self->last == NULL) {
        PyErr_SetString(PyExc_AttributeError, "last");
        return NULL;
    }
    return PyUnicode_FromFormat("%S %S", self->first, self->last);
}
[1] This is true when we know that the object is a basic type, like a string or a float.
[2] We relied on this in the tp_dealloc handler in this example, because our type doesn't support garbage collection.
The method is implemented as a C function that takes a Custom (or Custom subclass) instance as the first argument.
Methods always take an instance as the first argument. Methods often take positional and keyword arguments as
well, but in this case we don’t take any and don’t need to accept a positional argument tuple or keyword argument
dictionary. This method is equivalent to the Python method:
def name(self):
return "%s %s" % (self.first, self.last)
Note that we have to check for the possibility that our first and last members are NULL. This is because they can
be deleted, in which case they are set to NULL. It would be better to prevent deletion of these attributes and to restrict
the attribute values to be strings. We’ll see how to do that in the next section.
Now that we've defined the method, we need to create an array of method definitions:

static PyMethodDef Custom_methods[] = {
    {"name", (PyCFunction) Custom_name, METH_NOARGS,
     "Return the name, combining the first and last name"
    },
    {NULL}  /* Sentinel */
};
(note that we used the METH_NOARGS flag to indicate that the method is expecting no arguments other than self)
and assign it to the tp_methods slot:
.tp_methods = Custom_methods,
Finally, we’ll make our type usable as a base class for subclassing. We’ve written our methods carefully so far so that
they don’t make any assumptions about the type of the object being created or used, so all we need to do is to add
the Py_TPFLAGS_BASETYPE flag to our class flag definition:

.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE,
We rename PyInit_custom() to PyInit_custom2(), update the module name in the PyModuleDef struct, and
update the full class name in the PyTypeObject struct.
Finally, we update our setup.py file to include the new module.
typedef struct {
PyObject_HEAD
PyObject *first; /* first name */
PyObject *last; /* last name */
int number;
} CustomObject;
static void
Custom_dealloc(CustomObject *self)
{
Py_XDECREF(self->first);
Py_XDECREF(self->last);
Py_TYPE(self)->tp_free((PyObject *) self);
}
static PyObject *
Custom_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
{
CustomObject *self;
self = (CustomObject *) type->tp_alloc(type, 0);
if (self != NULL) {
self->first = PyUnicode_FromString("");
if (self->first == NULL) {
Py_DECREF(self);
return NULL;
}
self->last = PyUnicode_FromString("");
if (self->last == NULL) {
Py_DECREF(self);
return NULL;
}
self->number = 0;
}
return (PyObject *) self;
}
static int
Custom_init(CustomObject *self, PyObject *args, PyObject *kwds)
{
static char *kwlist[] = {"first", "last", "number", NULL};
    PyObject *first = NULL, *last = NULL;

    if (!PyArg_ParseTupleAndKeywords(args, kwds, "|UUi", kwlist,
                                     &first, &last,
                                     &self->number))
        return -1;
if (first) {
Py_SETREF(self->first, Py_NewRef(first));
}
if (last) {
Py_SETREF(self->last, Py_NewRef(last));
}
return 0;
}
static PyObject *
Custom_getfirst(CustomObject *self, void *closure)
{
return Py_NewRef(self->first);
}
static int
Custom_setfirst(CustomObject *self, PyObject *value, void *closure)
{
if (value == NULL) {
PyErr_SetString(PyExc_TypeError, "Cannot delete the first attribute");
return -1;
}
if (!PyUnicode_Check(value)) {
PyErr_SetString(PyExc_TypeError,
"The first attribute value must be a string");
return -1;
}
Py_SETREF(self->first, Py_NewRef(value));
return 0;
}
static PyObject *
Custom_getlast(CustomObject *self, void *closure)
{
return Py_NewRef(self->last);
}
static int
Custom_setlast(CustomObject *self, PyObject *value, void *closure)
{
if (value == NULL) {
PyErr_SetString(PyExc_TypeError, "Cannot delete the last attribute");
return -1;
}
if (!PyUnicode_Check(value)) {
PyErr_SetString(PyExc_TypeError,
"The last attribute value must be a string");
return -1;
    }
    Py_SETREF(self->last, Py_NewRef(value));
    return 0;
}
static PyObject *
Custom_name(CustomObject *self, PyObject *Py_UNUSED(ignored))
{
return PyUnicode_FromFormat("%S %S", self->first, self->last);
}
PyMODINIT_FUNC
PyInit_custom3(void)
{
PyObject *m;
if (PyType_Ready(&CustomType) < 0)
return NULL;
m = PyModule_Create(&custommodule);
    if (m == NULL)
        return NULL;
return m;
}
To provide greater control over the first and last attributes, we'll use custom getter and setter functions. Here
are the functions for getting and setting the first attribute:
static PyObject *
Custom_getfirst(CustomObject *self, void *closure)
{
Py_INCREF(self->first);
return self->first;
}
static int
Custom_setfirst(CustomObject *self, PyObject *value, void *closure)
{
PyObject *tmp;
if (value == NULL) {
PyErr_SetString(PyExc_TypeError, "Cannot delete the first attribute");
return -1;
}
if (!PyUnicode_Check(value)) {
PyErr_SetString(PyExc_TypeError,
"The first attribute value must be a string");
return -1;
}
tmp = self->first;
Py_INCREF(value);
self->first = value;
Py_DECREF(tmp);
return 0;
}
The getter function is passed a Custom object and a “closure”, which is a void pointer. In this case, the closure is
ignored. (The closure supports an advanced usage in which definition data is passed to the getter and setter. This
could, for example, be used to allow a single set of getter and setter functions that decide the attribute to get or set
based on data in the closure.)
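As an illustration of that advanced usage, a single getter could serve several string attributes by storing each attribute's offset in the closure. This generic getter is hypothetical, not part of the original example:

/* The closure carries the offset of a PyObject* field inside CustomObject. */
static PyObject *
Custom_get_string_attr(CustomObject *self, void *closure)
{
    size_t offset = (size_t) closure;
    PyObject *value = *(PyObject **) ((char *) self + offset);
    return Py_NewRef(value);
}

/* A matching PyGetSetDef entry would pass the offset as the closure:
   {"first", (getter) Custom_get_string_attr, (setter) Custom_setfirst,
    "first name", (void *) offsetof(CustomObject, first)},
*/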
The setter function is passed the Custom object, the new value, and the closure. The new value may be NULL, in
which case the attribute is being deleted. In our setter, we raise an error if the attribute is deleted or if its new value
is not a string.
We create an array of PyGetSetDef structures:

static PyGetSetDef Custom_getsetters[] = {
    {"first", (getter) Custom_getfirst, (setter) Custom_setfirst,
     "first name", NULL},
    {"last", (getter) Custom_getlast, (setter) Custom_setlast,
     "last name", NULL},
    {NULL}  /* Sentinel */
};

and register it in the tp_getset slot:
.tp_getset = Custom_getsetters,
The last item in a PyGetSetDef structure is the “closure” mentioned above. In this case, we aren’t using a closure,
so we just pass NULL.
We also remove the member definitions for the first and last attributes, leaving only number in the Custom_members table.
We also need to update the tp_init handler to only allow strings [3] to be passed:
static int
Custom_init(CustomObject *self, PyObject *args, PyObject *kwds)
{
static char *kwlist[] = {"first", "last", "number", NULL};
    PyObject *first = NULL, *last = NULL, *tmp;

    if (!PyArg_ParseTupleAndKeywords(args, kwds, "|UUi", kwlist,
                                     &first, &last,
                                     &self->number))
        return -1;
if (first) {
tmp = self->first;
Py_INCREF(first);
self->first = first;
Py_DECREF(tmp);
}
if (last) {
tmp = self->last;
Py_INCREF(last);
self->last = last;
Py_DECREF(tmp);
}
return 0;
}
With these changes, we can assure that the first and last members are never NULL so we can remove checks for
NULL values in almost all cases. This means that most of the Py_XDECREF() calls can be converted to Py_DECREF()
calls. The only place we can’t change these calls is in the tp_dealloc implementation, where there is the possibility
that the initialization of these members failed in tp_new.
We also rename the module initialization function and module name in the initialization function, as we did before,
and we add an extra definition to the setup.py file.
[3] We now know that the first and last members are strings, so perhaps we could be less careful about decrementing their reference counts;
however, we accept instances of string subclasses. Even though deallocating normal strings won’t call back into our objects, we can’t guarantee
that deallocating an instance of a string subclass won’t call back into our objects.
>>> l = []
>>> l.append(l)
>>> del l
In this example, we create a list that contains itself. When we delete it, it still has a reference from itself. Its reference
count doesn’t drop to zero. Fortunately, Python’s cyclic garbage collector will eventually figure out that the list is
garbage and free it.
In the second version of the Custom example, we allowed any kind of object to be stored in the first or last
attributes [4]. Besides, in the second and third versions, we allowed subclassing Custom, and subclasses may add arbitrary attributes. For either of those reasons, Custom objects can participate in cycles:

>>> import custom3
>>> class Derived(custom3.Custom): pass
...
>>> n = Derived()
>>> n.some_attribute = n
To allow a Custom instance participating in a reference cycle to be properly detected and collected by the cyclic GC,
our Custom type needs to fill two additional slots and to enable a flag that enables these slots:
#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include <stddef.h> /* for offsetof() */
typedef struct {
PyObject_HEAD
PyObject *first; /* first name */
PyObject *last; /* last name */
int number;
} CustomObject;
static int
Custom_traverse(CustomObject *self, visitproc visit, void *arg)
{
Py_VISIT(self->first);
Py_VISIT(self->last);
return 0;
}
static int
Custom_clear(CustomObject *self)
{
Py_CLEAR(self->first);
Py_CLEAR(self->last);
return 0;
}
static void
Custom_dealloc(CustomObject *self)
{
PyObject_GC_UnTrack(self);
    Custom_clear(self);
    Py_TYPE(self)->tp_free((PyObject *) self);
}
4 Also, even with our attributes restricted to str instances, the user could pass arbitrary str subclasses and therefore still create reference
cycles.
static PyObject *
Custom_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
{
CustomObject *self;
self = (CustomObject *) type->tp_alloc(type, 0);
if (self != NULL) {
self->first = PyUnicode_FromString("");
if (self->first == NULL) {
Py_DECREF(self);
return NULL;
}
self->last = PyUnicode_FromString("");
if (self->last == NULL) {
Py_DECREF(self);
return NULL;
}
self->number = 0;
}
return (PyObject *) self;
}
static int
Custom_init(CustomObject *self, PyObject *args, PyObject *kwds)
{
static char *kwlist[] = {"first", "last", "number", NULL};
PyObject *first = NULL, *last = NULL;
if (!PyArg_ParseTupleAndKeywords(args, kwds, "|UUi", kwlist,
                                 &first, &last,
                                 &self->number))
    return -1;
if (first) {
Py_SETREF(self->first, Py_NewRef(first));
}
if (last) {
Py_SETREF(self->last, Py_NewRef(last));
}
return 0;
}
static PyObject *
Custom_getfirst(CustomObject *self, void *closure)
{
return Py_NewRef(self->first);
}
static int
Custom_setfirst(CustomObject *self, PyObject *value, void *closure)
{
if (value == NULL) {
PyErr_SetString(PyExc_TypeError, "Cannot delete the first attribute");
return -1;
}
if (!PyUnicode_Check(value)) {
PyErr_SetString(PyExc_TypeError,
"The first attribute value must be a string");
return -1;
}
Py_XSETREF(self->first, Py_NewRef(value));
return 0;
}
static PyObject *
Custom_getlast(CustomObject *self, void *closure)
{
return Py_NewRef(self->last);
}
static int
Custom_setlast(CustomObject *self, PyObject *value, void *closure)
{
if (value == NULL) {
PyErr_SetString(PyExc_TypeError, "Cannot delete the last attribute");
return -1;
}
if (!PyUnicode_Check(value)) {
PyErr_SetString(PyExc_TypeError,
"The last attribute value must be a string");
return -1;
}
Py_XSETREF(self->last, Py_NewRef(value));
return 0;
}
static PyObject *
Custom_name(CustomObject *self, PyObject *Py_UNUSED(ignored))
{
return PyUnicode_FromFormat("%S %S", self->first, self->last);
}
PyMODINIT_FUNC
PyInit_custom4(void)
{
PyObject *m;
if (PyType_Ready(&CustomType) < 0)
return NULL;
m = PyModule_Create(&custommodule);
if (m == NULL)
return NULL;
    Py_INCREF(&CustomType);
    if (PyModule_AddObject(m, "Custom", (PyObject *) &CustomType) < 0) {
        Py_DECREF(&CustomType);
        Py_DECREF(m);
        return NULL;
    }
    return m;
}
First, the traversal method lets the cyclic GC know about subobjects that could participate in cycles:
static int
Custom_traverse(CustomObject *self, visitproc visit, void *arg)
{
int vret;
if (self->first) {
vret = visit(self->first, arg);
if (vret != 0)
            return vret;
    }
    if (self->last) {
        vret = visit(self->last, arg);
        if (vret != 0)
            return vret;
    }
    return 0;
}
For each subobject that can participate in cycles, we need to call the visit() function, which is passed to the
traversal method. The visit() function takes as arguments the subobject and the extra argument arg passed to the
traversal method. It returns an integer value that must be returned if it is non-zero.
Python provides a Py_VISIT() macro that automates calling visit functions. With Py_VISIT(), we can minimize
the amount of boilerplate in Custom_traverse:
static int
Custom_traverse(CustomObject *self, visitproc visit, void *arg)
{
Py_VISIT(self->first);
Py_VISIT(self->last);
return 0;
}
® Note
The tp_traverse implementation must name its arguments exactly visit and arg in order to use Py_VISIT().
Second, we need to provide a method for clearing any subobjects that can participate in cycles:
static int
Custom_clear(CustomObject *self)
{
Py_CLEAR(self->first);
Py_CLEAR(self->last);
return 0;
}
Notice the use of the Py_CLEAR() macro. It is the recommended and safe way to clear data attributes of arbitrary
types while decrementing their reference counts. If you were to call Py_XDECREF() instead on the attribute before
setting it to NULL, there is a possibility that the attribute’s destructor would call back into code that reads the attribute
again (especially if there is a reference cycle).
® Note
You could emulate Py_CLEAR() by hand (save the pointer in a temporary variable, set the attribute to NULL, then call Py_XDECREF() on the temporary), but it is much easier and less error-prone to always use Py_CLEAR() when deleting an attribute. Don't try to micro-optimize at the expense of robustness!
The deallocator Custom_dealloc may call arbitrary code when clearing attributes. It means the circular GC can be
triggered inside the function. Since the GC assumes the reference count is not zero, we need to untrack the object from
the GC by calling PyObject_GC_UnTrack() before clearing members. Here is our reimplemented deallocator
using PyObject_GC_UnTrack() and Custom_clear:
static void
Custom_dealloc(CustomObject *self)
{
PyObject_GC_UnTrack(self);
Custom_clear(self);
Py_TYPE(self)->tp_free((PyObject *) self);
}
That’s pretty much it. If we had written custom tp_alloc or tp_free handlers, we’d need to modify them for
cyclic garbage collection. Most extensions will use the versions automatically provided.
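Finally, the type definition must set the Py_TPFLAGS_HAVE_GC flag and install the two new handlers. A sketch of the relevant slot assignments (the remaining slots are unchanged from the earlier versions):
static PyTypeObject CustomType = {
    PyVarObject_HEAD_INIT(NULL, 0)
    .tp_name = "custom4.Custom",
    /* ... other slots as before ... */
    .tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC,
    .tp_dealloc = (destructor) Custom_dealloc,
    .tp_traverse = (traverseproc) Custom_traverse,
    .tp_clear = (inquiry) Custom_clear,
};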
#define PY_SSIZE_T_CLEAN
#include <Python.h>
typedef struct {
PyListObject list;
int state;
} SubListObject;
static PyObject *
SubList_increment(SubListObject *self, PyObject *unused)
{
self->state++;
return PyLong_FromLong(self->state);
}
static int
SubList_init(SubListObject *self, PyObject *args, PyObject *kwds)
{
if (PyList_Type.tp_init((PyObject *) self, args, kwds) < 0)
return -1;
self->state = 0;
return 0;
}
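The PyInit_sublist() function below refers to a SubListType and a sublistmodule that this excerpt does not define. Roughly, they look like this (method table, type object, and module definition):
static PyMethodDef SubList_methods[] = {
    {"increment", (PyCFunction) SubList_increment, METH_NOARGS,
     PyDoc_STR("increment state counter")},
    {NULL},
};

static PyTypeObject SubListType = {
    PyVarObject_HEAD_INIT(NULL, 0)
    .tp_name = "sublist.SubList",
    .tp_doc = PyDoc_STR("SubList objects"),
    .tp_basicsize = sizeof(SubListObject),
    .tp_itemsize = 0,
    .tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE,
    .tp_init = (initproc) SubList_init,
    .tp_methods = SubList_methods,
};

static PyModuleDef sublistmodule = {
    PyModuleDef_HEAD_INIT,
    .m_name = "sublist",
    .m_doc = "Example module that creates an extension type.",
    .m_size = -1,
};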
PyMODINIT_FUNC
PyInit_sublist(void)
{
PyObject *m;
SubListType.tp_base = &PyList_Type;
if (PyType_Ready(&SubListType) < 0)
return NULL;
m = PyModule_Create(&sublistmodule);
if (m == NULL)
return NULL;
Py_INCREF(&SubListType);
if (PyModule_AddObject(m, "SubList", (PyObject *) &SubListType) < 0) {
Py_DECREF(&SubListType);
Py_DECREF(m);
return NULL;
}
return m;
}
As you can see, the source code closely resembles the Custom examples in previous sections. We will break down
the main differences between them.
typedef struct {
    PyListObject list;
    int state;
} SubListObject;
The primary difference for derived type objects is that the base type’s object structure must be the first value. The
base type will already include PyObject_HEAD at the beginning of its structure.
When a Python object is a SubList instance, its PyObject * pointer can be safely cast to both PyListObject
* and SubListObject *:
static int
SubList_init(SubListObject *self, PyObject *args, PyObject *kwds)
{
if (PyList_Type.tp_init((PyObject *) self, args, kwds) < 0)
return -1;
self->state = 0;
return 0;
}
We see above how to call through to the __init__() method of the base type.
This pattern is important when writing a type with custom tp_new and tp_dealloc members. The tp_new handler
should not actually create the memory for the object with its tp_alloc, but let the base class handle it by calling its
own tp_new.
The PyTypeObject struct supports a tp_base specifying the type’s concrete base class. Due to cross-platform
compiler issues, you can’t fill that field directly with a reference to PyList_Type; it should be done later in the
module initialization function:
PyMODINIT_FUNC
PyInit_sublist(void)
{
PyObject* m;
SubListType.tp_base = &PyList_Type;
if (PyType_Ready(&SubListType) < 0)
return NULL;
m = PyModule_Create(&sublistmodule);
if (m == NULL)
return NULL;
Py_INCREF(&SubListType);
if (PyModule_AddObject(m, "SubList", (PyObject *) &SubListType) < 0) {
Py_DECREF(&SubListType);
Py_DECREF(m);
return NULL;
}
return m;
}
Before calling PyType_Ready(), the type structure must have the tp_base slot filled in. When we are deriving an
existing type, it is not necessary to fill out the tp_alloc slot with PyType_GenericNew() – the allocation function
from the base type will be inherited.
After that, calling PyType_Ready() and adding the type object to the module is the same as with the basic Custom
examples.
destructor tp_dealloc;
Py_ssize_t tp_vectorcall_offset;
getattrfunc tp_getattr;
setattrfunc tp_setattr;
PyAsyncMethods *tp_as_async; /* formerly known as tp_compare (Python 2)
or tp_reserved (Python 3) */
reprfunc tp_repr;
PyNumberMethods *tp_as_number;
PySequenceMethods *tp_as_sequence;
PyMappingMethods *tp_as_mapping;
hashfunc tp_hash;
ternaryfunc tp_call;
reprfunc tp_str;
getattrofunc tp_getattro;
setattrofunc tp_setattro;
/* Iterators */
getiterfunc tp_iter;
iternextfunc tp_iternext;
/* ... many more fields omitted in this excerpt ... */
destructor tp_finalize;
vectorcallfunc tp_vectorcall;
Now that’s a lot of methods. Don’t worry too much though – if you have a type you want to define, the chances are
very good that you will only implement a handful of these.
As you probably expect by now, we’re going to go over this and give more information about the various handlers.
We won’t go in the order they are defined in the structure, because there is a lot of historical baggage that impacts
the ordering of the fields. It’s often easiest to find an example that includes the fields you need and then change the
values to suit your new type.
The name of the type – as mentioned in the previous chapter, this will appear in various places, almost entirely for
diagnostic purposes. Try to choose something that will be helpful in such a situation!
These fields tell the runtime how much memory to allocate when new objects of this type are created. Python has
some built-in support for variable length structures (think: strings, tuples) which is where the tp_itemsize field
comes in. This will be dealt with later.
Here you can put a string (or its address) that you want returned when the Python script references obj.__doc__
to retrieve the doc string.
Now we come to the basic type methods – the ones most extension types will implement.
This function is called when the reference count of the instance of your type is reduced to zero and the Python
interpreter wants to reclaim it. If your type has memory to free or other clean-up to perform, you can put it here.
The object itself needs to be freed here as well. Here is an example of this function:
static void
newdatatype_dealloc(newdatatypeobject *obj)
{
free(obj->obj_UnderlyingDatatypePtr);
Py_TYPE(obj)->tp_free((PyObject *)obj);
}
If your type supports garbage collection, the destructor should call PyObject_GC_UnTrack() before clearing any
member fields:
static void
newdatatype_dealloc(newdatatypeobject *obj)
{
PyObject_GC_UnTrack(obj);
Py_CLEAR(obj->other_obj);
...
Py_TYPE(obj)->tp_free((PyObject *)obj);
}
One important requirement of the deallocator function is that it leaves any pending exceptions alone. This is important
since deallocators are frequently called as the interpreter unwinds the Python stack; when the stack is unwound due to
an exception (rather than normal returns), nothing is done to protect the deallocators from seeing that an exception has
already been set. Any actions which a deallocator performs which may cause additional Python code to be executed
may detect that an exception has been set. This can lead to misleading errors from the interpreter. The proper way
to protect against this is to save a pending exception before performing the unsafe action, and to restore it when done.
This can be done using the PyErr_Fetch() and PyErr_Restore() functions:
static void
my_dealloc(PyObject *obj)
{
MyObject *self = (MyObject *) obj;
PyObject *cbresult;
if (self->my_callback != NULL) {
        PyObject *err_type, *err_value, *err_traceback;

        /* Save the current exception state, if any */
        PyErr_Fetch(&err_type, &err_value, &err_traceback);

        cbresult = PyObject_CallNoArgs(self->my_callback);
        if (cbresult == NULL)
            PyErr_WriteUnraisable(self->my_callback);
        else
            Py_DECREF(cbresult);

        /* Restore the saved exception state */
        PyErr_Restore(err_type, err_value, err_traceback);

        Py_DECREF(self->my_callback);
    }
    Py_TYPE(obj)->tp_free((PyObject *) obj);
}
® Note
There are limitations to what you can safely do in a deallocator function. First, if your type supports garbage
collection (using tp_traverse and/or tp_clear), some of the object’s members can have been cleared or
finalized by the time tp_dealloc is called. Second, in tp_dealloc, your object is in an unstable state: its
reference count is equal to zero. Any call to a non-trivial object or API (as in the example above) might end up
calling tp_dealloc again, causing a double free and a crash.
Starting with Python 3.4, it is recommended not to put any complex finalization code in tp_dealloc, and instead
use the new tp_finalize type method.
µ See also
reprfunc tp_repr;
reprfunc tp_str;
The tp_repr handler should return a string object containing a representation of the instance for which it is called.
Here is a simple example:
static PyObject *
newdatatype_repr(newdatatypeobject *obj)
{
return PyUnicode_FromFormat("Repr-ified_newdatatype{{size:%d}}",
obj->obj_UnderlyingDatatypePtr->size);
}
If no tp_repr handler is specified, the interpreter will supply a representation that uses the type’s tp_name and a
uniquely identifying value for the object.
The tp_str handler is to str() what the tp_repr handler described above is to repr(); that is, it is called when
Python code calls str() on an instance of your object. Its implementation is very similar to the tp_repr function,
but the resulting string is intended for human consumption. If tp_str is not specified, the tp_repr handler is used
instead.
Here is a simple example:
static PyObject *
newdatatype_str(newdatatypeobject *obj)
{
return PyUnicode_FromFormat("Stringified_newdatatype{{size:%d}}",
obj->obj_UnderlyingDatatypePtr->size);
}
If accessing attributes of an object is always a simple operation (this will be explained shortly), there are generic
implementations which can be used to provide the PyObject* version of the attribute management functions. The
actual need for type-specific attribute handlers almost completely disappeared starting with Python 2.2, though there
are many examples which have not been updated to use some of the new generic mechanism that is available.
If tp_methods is not NULL, it must refer to an array of PyMethodDef structures. Each entry in the table is an
instance of this structure:
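For reference, the structure has four fields, roughly:
struct PyMethodDef {
    const char  *ml_name;   /* name of the method */
    PyCFunction  ml_meth;   /* pointer to the C implementation */
    int          ml_flags;  /* flag bits describing how the call is constructed */
    const char  *ml_doc;    /* docstring, or NULL */
};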
One entry should be defined for each method provided by the type; no entries are needed for methods inherited from
a base type. One additional entry is needed at the end; it is a sentinel that marks the end of the array. The ml_name
field of the sentinel must be NULL.
The second table is used to define attributes which map directly to data stored in the instance. A variety of primitive
C types are supported, and access may be read-only or read-write. The structures in the table are defined as:
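Roughly, each PyMemberDef entry has five fields:
typedef struct PyMemberDef {
    const char *name;    /* attribute name */
    int         type;    /* type code, e.g. Py_T_INT */
    Py_ssize_t  offset;  /* offset of the C member within the instance structure */
    int         flags;   /* e.g. Py_READONLY, or 0 for read-write access */
    const char *doc;     /* docstring, or NULL */
} PyMemberDef;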
For each entry in the table, a descriptor will be constructed and added to the type which will be able to extract a value
from the instance structure. The type field should contain a type code like Py_T_INT or Py_T_DOUBLE; the value
will be used to determine how to convert Python values to and from C values. The flags field is used to store flags
which control how the attribute can be accessed: you can set it to Py_READONLY to prevent Python code from setting
it.
An interesting advantage of using the tp_members table to build descriptors that are used at runtime is that any
attribute defined this way can have an associated doc string simply by providing the text in the table. An application
can use the introspection API to retrieve the descriptor from the class object, and get the doc string using its __doc__
attribute.
As with the tp_methods table, a sentinel entry with a name value of NULL is required.
The tp_getattr handler is called when the object requires an attribute look-up, in the same situations where the __getattr__() method of a class would be called. Here is an example:
static PyObject *
newdatatype_getattr(newdatatypeobject *obj, char *name)
{
if (strcmp(name, "data") == 0)
{
return PyLong_FromLong(obj->data);
}
PyErr_Format(PyExc_AttributeError,
"'%.100s' object has no attribute '%.400s'",
Py_TYPE(obj)->tp_name, name);
return NULL;
}
The tp_setattr handler is called when the __setattr__() or __delattr__() method of a class instance
would be called. When an attribute should be deleted, the third parameter will be NULL. Here is an example that
simply raises an exception; if this were really all you wanted, the tp_setattr handler should be set to NULL.
static int
newdatatype_setattr(newdatatypeobject *obj, char *name, PyObject *v)
{
PyErr_Format(PyExc_RuntimeError, "Read-only attribute: %s", name);
return -1;
}
The tp_richcompare handler is called when comparisons are needed. It is analogous to the rich comparison
methods, like __lt__(), and also called by PyObject_RichCompare() and PyObject_RichCompareBool().
This function is called with two Python objects and the operator as arguments, where the operator is one of Py_EQ,
Py_NE, Py_LE, Py_GE, Py_LT or Py_GT. It should compare the two objects with respect to the specified operator and
return Py_True or Py_False if the comparison is successful, Py_NotImplemented to indicate that comparison
is not implemented and the other object’s comparison method should be tried, or NULL if an exception was set.
Here is a sample implementation, for a datatype whose instances are considered equal when their internal sizes are equal:
static PyObject *
newdatatype_richcmp(newdatatypeobject *obj1, newdatatypeobject *obj2, int op)
{
PyObject *result;
int c, size1, size2;
size1 = obj1->obj_UnderlyingDatatypePtr->size;
size2 = obj2->obj_UnderlyingDatatypePtr->size;
switch (op) {
case Py_LT: c = size1 < size2; break;
case Py_LE: c = size1 <= size2; break;
case Py_EQ: c = size1 == size2; break;
case Py_NE: c = size1 != size2; break;
case Py_GT: c = size1 > size2; break;
case Py_GE: c = size1 >= size2; break;
}
result = c ? Py_True : Py_False;
Py_INCREF(result);
return result;
}
PyNumberMethods *tp_as_number;
PySequenceMethods *tp_as_sequence;
PyMappingMethods *tp_as_mapping;
If you wish your object to be able to act like a number, a sequence, or a mapping object, then you place the address
of a structure that implements the C type PyNumberMethods, PySequenceMethods, or PyMappingMethods,
respectively. It is up to you to fill in this structure with appropriate values. You can find examples of the use of each
of these in the Objects directory of the Python source distribution.
hashfunc tp_hash;
This function, if you choose to provide it, should return a hash number for an instance of your data type. Here is a
simple example:
static Py_hash_t
newdatatype_hash(newdatatypeobject *obj)
{
Py_hash_t result;
result = obj->some_size + 32767 * obj->some_number;
if (result == -1)
result = -2;
return result;
}
Py_hash_t is a signed integer type with a platform-varying width. Returning -1 from tp_hash indicates an error,
which is why you should be careful to avoid returning it when hash computation is successful, as seen above.
ternaryfunc tp_call;
This function is called when an instance of your data type is “called”, for example, if obj1 is an instance of your data
type and the Python script contains obj1('hello'), the tp_call handler is invoked.
This function takes three arguments:
1. self is the instance of the data type which is the subject of the call. If the call is obj1('hello'), then self is
obj1.
2. args is a tuple containing the arguments to the call. You can use PyArg_ParseTuple() to extract the argu-
ments.
3. kwds is a dictionary of keyword arguments that were passed. If this is non-NULL and you support keyword
arguments, use PyArg_ParseTupleAndKeywords() to extract the arguments. If you do not want to support
keyword arguments and this is non-NULL, raise a TypeError with a message saying that keyword arguments
are not supported.
Here is a toy tp_call implementation:
static PyObject *
newdatatype_call(newdatatypeobject *obj, PyObject *args, PyObject *kwds)
{
PyObject *result;
const char *arg1;
const char *arg2;
    const char *arg3;

    if (!PyArg_ParseTuple(args, "sss:call", &arg1, &arg2, &arg3))
        return NULL;
    result = PyUnicode_FromFormat("Returning -- value: [%d] arg1: [%s] arg2: [%s] arg3: [%s]\n",
                                  obj->obj_UnderlyingDatatypePtr->size, arg1, arg2, arg3);
    return result;
}
/* Iterators */
getiterfunc tp_iter;
iternextfunc tp_iternext;
These functions provide support for the iterator protocol. Both handlers take exactly one parameter, the instance for
which they are being called, and return a new reference. In the case of an error, they should set an exception and
return NULL. tp_iter corresponds to the Python __iter__() method, while tp_iternext corresponds to the
Python __next__() method.
Any iterable object must implement the tp_iter handler, which must return an iterator object. Here the same
guidelines apply as for Python classes:
• For collections (such as lists and tuples) which can support multiple independent iterators, a new iterator should
be created and returned by each call to tp_iter.
• Objects which can only be iterated over once (usually due to side effects of iteration, such as file objects) can
implement tp_iter by returning a new reference to themselves – and should also therefore implement the
tp_iternext handler.
Any iterator object should implement both tp_iter and tp_iternext. An iterator’s tp_iter handler should
return a new reference to the iterator. Its tp_iternext handler should return a new reference to the next object in
the iteration, if there is one. If the iteration has reached the end, tp_iternext may return NULL without setting
an exception, or it may set StopIteration in addition to returning NULL; avoiding the exception can yield slightly
better performance. If an actual error occurs, tp_iternext should always set an exception and return NULL.
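As a sketch (using a hypothetical CounterObject type that is not part of the examples above), an iterator yielding the integers from 0 up to, but not including, a stored limit could implement the two handlers like this:
/* Hypothetical iterator type, for illustration only. */
typedef struct {
    PyObject_HEAD
    long next;   /* next value to produce */
    long limit;  /* stop before this value */
} CounterObject;

static PyObject *
Counter_iter(PyObject *self)
{
    /* An iterator's tp_iter returns a new reference to itself. */
    return Py_NewRef(self);
}

static PyObject *
Counter_iternext(PyObject *op)
{
    CounterObject *self = (CounterObject *) op;
    if (self->next >= self->limit) {
        /* End of iteration: return NULL without setting an exception. */
        return NULL;
    }
    return PyLong_FromLong(self->next++);
}

/* In the type definition:
 *     .tp_iter = Counter_iter,
 *     .tp_iternext = Counter_iternext,
 */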
µ See also
For an object to be weakly referenceable, the extension type must set the Py_TPFLAGS_MANAGED_WEAKREF bit of
the tp_flags field. The legacy tp_weaklistoffset field should be left as zero.
Concretely, here is how the statically declared type object would look:
static PyTypeObject TrivialType = {
PyVarObject_HEAD_INIT(NULL, 0)
/* ... other members omitted for brevity ... */
.tp_flags = Py_TPFLAGS_MANAGED_WEAKREF | ...,
};
The only further addition is that tp_dealloc needs to clear any weak references (by calling
PyObject_ClearWeakRefs()):
static void
Trivial_dealloc(TrivialObject *self)
{
/* Clear weakrefs first before calling any destructors */
PyObject_ClearWeakRefs((PyObject *) self);
/* ... remainder of destruction code omitted for brevity ... */
Py_TYPE(self)->tp_free((PyObject *) self);
}
When you need to verify that an object is a concrete instance of the type you are implementing, use the
PyObject_TypeCheck() function. A sample of its use might be something like the following:
if (!PyObject_TypeCheck(some_object, &MyType)) {
PyErr_SetString(PyExc_TypeError, "arg #1 not a mything");
return NULL;
}
µ See also
The module initialization function returns either a fully initialized module, or a PyModuleDef instance. See initializing-modules for details.
For modules with ASCII-only names, the function must be named PyInit_<modulename>, with <modulename>
replaced by the name of the module. When using multi-phase-initialization, non-ASCII module names are allowed.
In this case, the initialization function name is PyInitU_<modulename>, with <modulename> encoded using
Python’s punycode encoding with hyphens replaced by underscores. In Python:
def initfunc_name(name):
try:
suffix = b'_' + name.encode('ascii')
except UnicodeEncodeError:
suffix = b'U_' + name.encode('punycode').replace(b'-', b'_')
return b'PyInit' + suffix
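For example, for an ASCII module name:
>>> initfunc_name('spam')
b'PyInit_spam'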
It is possible to export multiple modules from a single shared library by defining multiple initialization functions.
However, importing them requires using symbolic links or a custom importer, because by default only the function
corresponding to the filename is found. See the “Multiple modules in one library” section in PEP 489 for details.
Module authors are encouraged to use the setuptools approach for building extension modules, instead of the one
described in this section. You will still need the C compiler that was used to build Python; typically Microsoft Visual
C++.
® Note
This chapter mentions a number of filenames that include an encoded Python version number. These filenames
are represented with the version number shown as XY; in practice, 'X' will be the major version number and
'Y' will be the minor version number of the Python release you’re working with. For example, if you are using
Python 3.12, XY will actually be 312.
The first command created three files: spam.obj, spam.dll and spam.lib. Spam.dll does not contain any
Python functions (such as PyArg_ParseTuple()), but it does know how to find the Python code thanks to
pythonXY.lib.
The second command created ni.dll (and .obj and .lib), which knows how to find the necessary functions from
spam, and also from the Python executable.
Not every identifier is exported to the lookup table. If you want any other modules (including Python) to be
able to see your identifiers, you have to say __declspec(dllexport), as in PyObject __declspec(dllexport)
*PyInit_spam(void) or PyObject __declspec(dllexport) *NiGetSpamData(void).
Developer Studio will throw in a lot of import libraries that you do not really need, adding about 100K to your
executable. To get rid of them, use the Project Settings dialog, Link tab, to specify ignore default libraries. Add the
correct msvcrtxx.lib to the list of libraries.
THREE
Sometimes, rather than creating an extension that runs inside the Python interpreter as the main application, it is
desirable to instead embed the CPython runtime inside a larger application. This section covers some of the details
involved in doing that successfully.
µ See also
c-api-index
The details of Python’s C interface are given in this manual. A great deal of necessary information can be
found here.
#define PY_SSIZE_T_CLEAN
#include <Python.h>
The Py_SetProgramName() function should be called before Py_Initialize() to inform the interpreter about
paths to Python run-time libraries. Next, the Python interpreter is initialized with Py_Initialize(), followed by
the execution of a hard-coded Python script that prints the date and time. Afterwards, the Py_FinalizeEx() call
shuts the interpreter down, followed by the end of the program. In a real program, you may want to get the Python
script from another source, perhaps a text-editor routine, a file, or a database. Getting the Python code from a file
can better be done by using the PyRun_SimpleFile() function, which saves you the trouble of allocating memory
space and loading the file contents.
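Putting those steps together (and expanding on the two lines shown above), a complete minimal program along these lines should work; error handling is kept to a minimum:
#define PY_SSIZE_T_CLEAN
#include <Python.h>

int
main(int argc, char *argv[])
{
    wchar_t *program = Py_DecodeLocale(argv[0], NULL);
    if (program == NULL) {
        fprintf(stderr, "Fatal error: cannot decode argv[0]\n");
        exit(1);
    }
    Py_SetProgramName(program);  /* optional but recommended */
    Py_Initialize();
    PyRun_SimpleString("from time import time, ctime\n"
                       "print('Today is', ctime(time()))\n");
    if (Py_FinalizeEx() < 0) {
        exit(120);
    }
    PyMem_RawFree(program);
    return 0;
}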
#define PY_SSIZE_T_CLEAN
#include <Python.h>
int
main(int argc, char *argv[])
{
PyObject *pName, *pModule, *pFunc;
PyObject *pArgs, *pValue;
int i;
if (argc < 3) {
fprintf(stderr,"Usage: call pythonfile funcname [args]\n");
return 1;
}
Py_Initialize();
pName = PyUnicode_DecodeFSDefault(argv[1]);
/* Error checking of pName left out */
pModule = PyImport_Import(pName);
Py_DECREF(pName);
if (pModule != NULL) {
pFunc = PyObject_GetAttrString(pModule, argv[2]);
/* pFunc is a new reference */
This code loads a Python script using argv[1], and calls the function named in argv[2]. Its integer arguments are
the other values of the argv array. If you compile and link this program (let’s call the finished executable call), and
use it to execute a Python script, such as:
def multiply(a,b):
print("Will compute", a, "times", b)
c = 0
for i in range(0, a):
c = c + b
return c
Although the program is quite large for its functionality, most of the code is for data conversion between Python and
C, and for error reporting. The interesting part with respect to embedding Python starts with
Py_Initialize();
pName = PyUnicode_DecodeFSDefault(argv[1]);
/* Error checking of pName left out */
pModule = PyImport_Import(pName);
After initializing the interpreter, the script is loaded using PyImport_Import(). This routine needs a Python string
as its argument, which is constructed using the PyUnicode_DecodeFSDefault() data conversion routine.
Once the script is loaded, the name we’re looking for is retrieved using PyObject_GetAttrString(). If the name
exists, and the object returned is callable, you can safely assume that it is a function. The program then proceeds by
constructing a tuple of arguments as normal. The call to the Python function is then made with:
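pValue = PyObject_CallObject(pFunc, pArgs);  /* pValue: new reference, or NULL on failure */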
Upon return of the function, pValue is either NULL or it contains a reference to the return value of the function. Be
sure to release the reference after examining the value.
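The PyInit_emb() function below refers to an EmbModule definition and a numargs counter that this excerpt does not show. The supporting code looks roughly like this:
static int numargs = 0;

/* Return the number of arguments of the application command line */
static PyObject *
emb_numargs(PyObject *self, PyObject *args)
{
    if (!PyArg_ParseTuple(args, ":numargs"))
        return NULL;
    return PyLong_FromLong(numargs);
}

static PyMethodDef EmbMethods[] = {
    {"numargs", emb_numargs, METH_VARARGS,
     "Return the number of arguments received by the process."},
    {NULL, NULL, 0, NULL}
};

static struct PyModuleDef EmbModule = {
    PyModuleDef_HEAD_INIT, "emb", NULL, -1, EmbMethods,
    NULL, NULL, NULL, NULL
};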
static PyObject*
PyInit_emb(void)
{
return PyModule_Create(&EmbModule);
}
Insert the above code just above the main() function. Also, insert the following two statements before the call to
Py_Initialize():
numargs = argc;
PyImport_AppendInittab("emb", &PyInit_emb);
These two lines initialize the numargs variable, and make the emb.numargs() function accessible to the embedded
Python interpreter. With these extensions, the Python script can do things like
import emb
print("Number of arguments", emb.numargs())
In a real application, the methods will expose an API of the application to Python.
$ /opt/bin/python3.11-config --cflags
-I/opt/include/python3.11 -I/opt/include/python3.11 -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall
• pythonX.Y-config --ldflags --embed will give you the recommended flags when linking:
® Note
To avoid confusion between several Python installations (and especially between the system Python and your own
compiled Python), it is recommended that you use the absolute path to pythonX.Y-config, as in the above
example.
If this procedure doesn’t work for you (it is not guaranteed to work for all Unix-like platforms; however, we welcome
bug reports) you will have to read your system’s documentation about dynamic linking and/or examine Python’s
Makefile (use sysconfig.get_makefile_filename() to find its location) and compilation options. In this
case, the sysconfig module is a useful tool to programmatically extract the configuration values that you will want
to combine together. For example:
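For instance, the relevant variables might look like this on one machine (the exact values depend on how your Python was built):
>>> import sysconfig
>>> sysconfig.get_config_var('LIBS')
'-lpthread -ldl  -lutil'
>>> sysconfig.get_config_var('LINKFORSHARED')
'-Xlinker -export-dynamic'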
GLOSSARY
>>>
The default Python prompt of the interactive shell. Often seen for code examples which can be executed
interactively in the interpreter.
...
Can refer to:
• The default Python prompt of the interactive shell when entering the code for an indented code block,
when within a pair of matching left and right delimiters (parentheses, square brackets, curly braces or
triple quotes), or after specifying a decorator.
• The Ellipsis built-in constant.
2to3
A tool that tries to convert Python 2.x code to Python 3.x code by handling most of the incompatibilities which
can be detected by parsing the source and traversing the parse tree.
2to3 is available in the standard library as lib2to3; a standalone entry point is provided as Tools/scripts/
2to3. See 2to3-reference.
abstract base class
Abstract base classes complement duck-typing by providing a way to define interfaces when other techniques
like hasattr() would be clumsy or subtly wrong (for example with magic methods). ABCs introduce virtual
subclasses, which are classes that don’t inherit from a class but are still recognized by isinstance() and
issubclass(); see the abc module documentation. Python comes with many built-in ABCs for data struc-
tures (in the collections.abc module), numbers (in the numbers module), streams (in the io module),
import finders and loaders (in the importlib.abc module). You can create your own ABCs with the abc
module.
annotation
A label associated with a variable, a class attribute or a function parameter or return value, used by convention
as a type hint.
Annotations of local variables cannot be accessed at runtime, but annotations of global variables, class at-
tributes, and functions are stored in the __annotations__ special attribute of modules, classes, and func-
tions, respectively.
See variable annotation, function annotation, PEP 484 and PEP 526, which describe this functionality. Also
see annotations-howto for best practices on working with annotations.
argument
A value passed to a function (or method) when calling the function. There are two kinds of argument:
• keyword argument: an argument preceded by an identifier (e.g. name=) in a function call or passed as a
value in a dictionary preceded by **. For example, 3 and 5 are both keyword arguments in the following
calls to complex():
complex(real=3, imag=5)
complex(**{'real': 3, 'imag': 5})
• positional argument: an argument that is not a keyword argument. Positional arguments can appear at the
beginning of an argument list and/or be passed as elements of an iterable preceded by *. For example, 3
and 5 are both positional arguments in the following calls:
complex(3, 5)
complex(*(3, 5))
Arguments are assigned to the named local variables in a function body. See the calls section for the rules
governing this assignment. Syntactically, any expression can be used to represent an argument; the evaluated
value is assigned to the local variable.
See also the parameter glossary entry, the FAQ question on the difference between arguments and parameters,
and PEP 362.
asynchronous context manager
An object which controls the environment seen in an async with statement by defining __aenter__() and
__aexit__() methods. Introduced by PEP 492.
asynchronous generator
A function which returns an asynchronous generator iterator. It looks like a coroutine function defined with
async def except that it contains yield expressions for producing a series of values usable in an async
for loop.
Usually refers to an asynchronous generator function, but may refer to an asynchronous generator iterator in
some contexts. In cases where the intended meaning isn’t clear, using the full terms avoids ambiguity.
An asynchronous generator function may contain await expressions as well as async for, and async with
statements.
asynchronous generator iterator
An object created by an asynchronous generator function.
This is an asynchronous iterator which when called using the __anext__() method returns an awaitable object
which will execute the body of the asynchronous generator function until the next yield expression.
Each yield temporarily suspends processing, remembering the execution state (including local vari-
ables and pending try-statements). When the asynchronous generator iterator effectively resumes with another
awaitable returned by __anext__(), it picks up where it left off. See PEP 492 and PEP 525.
asynchronous iterable
An object that can be used in an async for statement. Must return an asynchronous iterator from its
__aiter__() method. Introduced by PEP 492.
asynchronous iterator
An object that implements the __aiter__() and __anext__() methods. __anext__() must return an
awaitable object. async for resolves the awaitables returned by an asynchronous iterator’s __anext__()
method until it raises a StopAsyncIteration exception. Introduced by PEP 492.
attribute
A value associated with an object which is usually referenced by name using dotted expressions. For example,
if an object o has an attribute a it would be referenced as o.a.
It is possible to give an object an attribute whose name is not an identifier as defined by identifiers, for example
using setattr(), if the object allows it. Such an attribute will not be accessible using a dotted expression,
and would instead need to be retrieved with getattr().
awaitable
An object that can be used in an await expression. Can be a coroutine or an object with an __await__()
method. See also PEP 492.
BDFL
Benevolent Dictator For Life, a.k.a. Guido van Rossum, Python’s creator.
binary file
A file object able to read and write bytes-like objects. Examples of binary files are files opened in binary mode ('rb', 'wb' or 'rb+'), sys.stdin.buffer, sys.stdout.buffer, and instances of io.BytesIO and gzip.GzipFile.
See also text file for a file object able to read and write str objects.
borrowed reference
In Python’s C API, a borrowed reference is a reference to an object, where the code using the object does not
own the reference. It becomes a dangling pointer if the object is destroyed. For example, a garbage collection
can remove the last strong reference to the object and so destroy it.
Calling Py_INCREF() on the borrowed reference is recommended to convert it to a strong reference in-place,
except when the object cannot be destroyed before the last usage of the borrowed reference. The Py_NewRef()
function can be used to create a new strong reference.
bytes-like object
An object that supports the buffer protocol and can export a C-contiguous buffer. This includes all bytes,
bytearray, and array.array objects, as well as many common memoryview objects. Bytes-like objects
can be used for various operations that work with binary data; these include compression, saving to a binary
file, and sending over a socket.
Some operations need the binary data to be mutable. The documentation often refers to these as “read-write
bytes-like objects”. Example mutable buffer objects include bytearray and a memoryview of a bytearray.
Other operations require the binary data to be stored in immutable objects (“read-only bytes-like objects”);
examples of these include bytes and a memoryview of a bytes object.
bytecode
Python source code is compiled into bytecode, the internal representation of a Python program in the CPython
interpreter. The bytecode is also cached in .pyc files so that executing the same file is faster the second time
(recompilation from source to bytecode can be avoided). This “intermediate language” is said to run on a
virtual machine that executes the machine code corresponding to each bytecode. Do note that bytecodes are
not expected to work between different Python virtual machines, nor to be stable between Python releases.
A list of bytecode instructions can be found in the documentation for the dis module.
callable
A callable is an object that can be called, possibly with a set of arguments (see argument), with the following
syntax: callable(argument1, argument2, argumentN).
A function, and by extension a method, is a callable. An instance of a class that implements the __call__()
method is also a callable.
callback
A subroutine function which is passed as an argument to be executed at some point in the future.
class
A template for creating user-defined objects. Class definitions normally contain method definitions which
operate on instances of the class.
class variable
A variable defined in a class and intended to be modified only at class level (i.e., not in an instance of the class).
complex number
An extension of the familiar real number system in which all numbers are expressed as a sum of a real part and
an imaginary part. Imaginary numbers are real multiples of the imaginary unit (the square root of -1), often
written i in mathematics or j in engineering. Python has built-in support for complex numbers, which are
written with this latter notation; the imaginary part is written with a j suffix, e.g., 3+1j. To get access to com-
plex equivalents of the math module, use cmath. Use of complex numbers is a fairly advanced mathematical
feature. If you’re not aware of a need for them, it’s almost certain you can safely ignore them.
context manager
An object which controls the environment seen in a with statement by defining __enter__() and
__exit__() methods. See PEP 343.
context variable
A variable which can have different values depending on its context. This is similar to Thread-Local Storage in
which each execution thread may have a different value for a variable. However, with context variables, there
may be several contexts in one execution thread and the main usage for context variables is to keep track of
variables in concurrent asynchronous tasks. See contextvars.
contiguous
A buffer is considered contiguous exactly if it is either C-contiguous or Fortran contiguous. Zero-dimensional
buffers are C and Fortran contiguous. In one-dimensional arrays, the items must be laid out in memory next
to each other, in order of increasing indexes starting from zero. In multidimensional C-contiguous arrays, the
last index varies the fastest when visiting items in order of memory address. However, in Fortran contiguous
arrays, the first index varies the fastest.
coroutine
Coroutines are a more generalized form of subroutines. Subroutines are entered at one point and exited at
another point. Coroutines can be entered, exited, and resumed at many different points. They can be imple-
mented with the async def statement. See also PEP 492.
coroutine function
A function which returns a coroutine object. A coroutine function may be defined with the async def state-
ment, and may contain await, async for, and async with keywords. These were introduced by PEP
492.
CPython
The canonical implementation of the Python programming language, as distributed on python.org. The term
“CPython” is used when necessary to distinguish this implementation from others such as Jython or IronPython.
decorator
A function returning another function, usually applied as a function transformation using the @wrapper syntax.
Common examples for decorators are classmethod() and staticmethod().
The decorator syntax is merely syntactic sugar, the following two function definitions are semantically equiv-
alent:
def f(arg):
...
f = staticmethod(f)
@staticmethod
def f(arg):
...
The same concept exists for classes, but is less commonly used there. See the documentation for function
definitions and class definitions for more about decorators.
descriptor
Any object which defines the methods __get__(), __set__(), or __delete__(). When a class attribute
is a descriptor, its special binding behavior is triggered upon attribute lookup. Normally, using a.b to get,
set or delete an attribute looks up the object named b in the class dictionary for a, but if b is a descriptor,
the respective descriptor method gets called. Understanding descriptors is a key to a deep understanding of
Python because they are the basis for many features including functions, methods, properties, class methods,
static methods, and reference to super classes.
For more information about descriptors’ methods, see descriptors or the Descriptor How To Guide.
dictionary
An associative array, where arbitrary keys are mapped to values. The keys can be any object with __hash__()
and __eq__() methods. Called a hash in Perl.
dictionary comprehension
A compact way to process all or part of the elements in an iterable and return a dictionary with the re-
sults. results = {n: n ** 2 for n in range(10)} generates a dictionary containing key n mapped
to value n ** 2. See comprehensions.
dictionary view
The objects returned from dict.keys(), dict.values(), and dict.items() are called dictionary views.
They provide a dynamic view on the dictionary’s entries, which means that when the dictionary changes, the
view reflects these changes. To force the dictionary view to become a full list use list(dictview). See
dict-views.
docstring
A string literal which appears as the first expression in a class, function or module. While ignored when the
suite is executed, it is recognized by the compiler and put into the __doc__ attribute of the enclosing class,
function or module. Since it is available via introspection, it is the canonical place for documentation of the
object.
duck-typing
A programming style which does not look at an object’s type to determine if it has the right interface; instead,
the method or attribute is simply called or used (“If it looks like a duck and quacks like a duck, it must be
a duck.”) By emphasizing interfaces rather than specific types, well-designed code improves its flexibility
by allowing polymorphic substitution. Duck-typing avoids tests using type() or isinstance(). (Note,
however, that duck-typing can be complemented with abstract base classes.) Instead, it typically employs
hasattr() tests or EAFP programming.
EAFP
Easier to ask for forgiveness than permission. This common Python coding style assumes the existence of
valid keys or attributes and catches exceptions if the assumption proves false. This clean and fast style is
characterized by the presence of many try and except statements. The technique contrasts with the LBYL
style common to many other languages such as C.
expression
A piece of syntax which can be evaluated to some value. In other words, an expression is an accumulation of
expression elements like literals, names, attribute access, operators or function calls which all return a value. In
contrast to many other languages, not all language constructs are expressions. There are also statements which
cannot be used as expressions, such as while. Assignments are also statements, not expressions.
extension module
A module written in C or C++, using Python’s C API to interact with the core and with user code.
f-string
String literals prefixed with 'f' or 'F' are commonly called “f-strings” which is short for formatted string
literals. See also PEP 498.
file object
An object exposing a file-oriented API (with methods such as read() or write()) to an underlying resource.
Depending on the way it was created, a file object can mediate access to a real on-disk file or to another type of
storage or communication device (for example standard input/output, in-memory buffers, sockets, pipes, etc.).
File objects are also called file-like objects or streams.
There are actually three categories of file objects: raw binary files, buffered binary files and text files. Their
interfaces are defined in the io module. The canonical way to create a file object is by using the open()
function.
file-like object
A synonym for file object.
filesystem encoding and error handler
Encoding and error handler used by Python to decode bytes from the operating system and encode Unicode to
the operating system.
The filesystem encoding must guarantee to successfully decode all bytes below 128. If the file system encoding
fails to provide this guarantee, API functions can raise UnicodeError.
The sys.getfilesystemencoding() and sys.getfilesystemencodeerrors() functions can be
used to get the filesystem encoding and error handler.
The filesystem encoding and error handler are configured at Python startup by the PyConfig_Read() func-
tion: see filesystem_encoding and filesystem_errors members of PyConfig.
function
A series of statements which returns some value to a caller. It can also be passed zero or more arguments which
may be used in the execution of the body. See also parameter, method, and the function section.
function annotation
An annotation of a function parameter or return value.
Function annotations are usually used for type hints: for example, this function is expected to take two int
arguments and is also expected to have an int return value:
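def sum_two_numbers(a: int, b: int) -> int:
    return a + b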
garbage collection
The process of freeing memory when it is not used anymore. Python performs garbage collection via reference
counting and a cyclic garbage collector that is able to detect and break reference cycles. The garbage collector
can be controlled using the gc module.
generator
A function which returns a generator iterator. It looks like a normal function except that it contains yield
expressions for producing a series of values usable in a for-loop or that can be retrieved one at a time with the
next() function.
Usually refers to a generator function, but may refer to a generator iterator in some contexts. In cases where
the intended meaning isn’t clear, using the full terms avoids ambiguity.
generator iterator
An object created by a generator function.
Each yield temporarily suspends processing, remembering the execution state (including local vari-
ables and pending try-statements). When the generator iterator resumes, it picks up where it left off (in contrast
to functions which start fresh on every invocation).
generator expression
An expression that returns an iterator. It looks like a normal expression followed by a for clause defining a
loop variable, range, and an optional if clause. The combined expression generates values for an enclosing
function:
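>>> sum(i*i for i in range(10))  # sum of squares 0, 1, 4, ... 81
285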
generic function
A function composed of multiple functions implementing the same operation for different types. Which im-
plementation should be used during a call is determined by the dispatch algorithm.
See also the single dispatch glossary entry, the functools.singledispatch() decorator, and PEP 443.
generic type
A type that can be parameterized; typically a container class such as list or dict. Used for type hints and
annotations.
For more details, see generic alias types, PEP 483, PEP 484, PEP 585, and the typing module.
GIL
See global interpreter lock.
global interpreter lock
The mechanism used by the CPython interpreter to assure that only one thread executes Python bytecode at
a time. This simplifies the CPython implementation by making the object model (including critical built-in
types such as dict) implicitly safe against concurrent access. Locking the entire interpreter makes it easier
for the interpreter to be multi-threaded, at the expense of much of the parallelism afforded by multi-processor
machines.
However, some extension modules, either standard or third-party, are designed so as to release the GIL when
doing computationally intensive tasks such as compression or hashing. Also, the GIL is always released when
doing I/O.
Past efforts to create a “free-threaded” interpreter (one which locks shared data at a much finer granularity)
have not been successful because performance suffered in the common single-processor case. It is believed
that overcoming this performance issue would make the implementation much more complicated and therefore
costlier to maintain.
hash-based pyc
A bytecode cache file that uses the hash rather than the last-modified time of the corresponding source file to
determine its validity. See pyc-invalidation.
hashable
An object is hashable if it has a hash value which never changes during its lifetime (it needs a __hash__()
method), and can be compared to other objects (it needs an __eq__() method). Hashable objects which
compare equal must have the same hash value.
Hashability makes an object usable as a dictionary key and a set member, because these data structures use the
hash value internally.
Most of Python’s immutable built-in objects are hashable; mutable containers (such as lists or dictionaries)
are not; immutable containers (such as tuples and frozensets) are only hashable if their elements are hashable.
Objects which are instances of user-defined classes are hashable by default. They all compare unequal (except
with themselves), and their hash value is derived from their id().
IDLE
An Integrated Development and Learning Environment for Python. IDLE is a basic editor and interpreter envi-
ronment which ships with the standard distribution of Python.
immortal
Immortal objects are a CPython implementation detail introduced in PEP 683.
If an object is immortal, its reference count is never modified, and therefore it is never deallocated while the
interpreter is running. For example, True and None are immortal in CPython.
immutable
An object with a fixed value. Immutable objects include numbers, strings and tuples. Such an object cannot
be altered. A new object has to be created if a different value has to be stored. They play an important role in
places where a constant hash value is needed, for example as a key in a dictionary.
import path
A list of locations (or path entries) that are searched by the path based finder for modules to import. During
import, this list of locations usually comes from sys.path, but for subpackages it may also come from the
parent package’s __path__ attribute.
importing
The process by which Python code in one module is made available to Python code in another module.
importer
An object that both finds and loads a module; both a finder and loader object.
interactive
Python has an interactive interpreter which means you can enter statements and expressions at the interpreter
prompt, immediately execute them and see their results. Just launch python with no arguments (possibly
by selecting it from your computer’s main menu). It is a very powerful way to test out new ideas or inspect
modules and packages (remember help(x)).
interpreted
Python is an interpreted language, as opposed to a compiled one, though the distinction can be blurry because
of the presence of the bytecode compiler. This means that source files can be run directly without explicitly
creating an executable which is then run. Interpreted languages typically have a shorter development/debug
cycle than compiled ones, though their programs generally also run more slowly. See also interactive.
interpreter shutdown
When asked to shut down, the Python interpreter enters a special phase where it gradually releases all allocated
resources, such as modules and various critical internal structures. It also makes several calls to the garbage
collector. This can trigger the execution of code in user-defined destructors or weakref callbacks. Code exe-
cuted during the shutdown phase can encounter various exceptions as the resources it relies on may not function
anymore (common examples are library modules or the warnings machinery).
The main reason for interpreter shutdown is that the __main__ module or the script being run has finished
executing.
iterable
An object capable of returning its members one at a time. Examples of iterables include all sequence types
(such as list, str, and tuple) and some non-sequence types like dict, file objects, and objects of any
classes you define with an __iter__() method or with a __getitem__() method that implements sequence
semantics.
Iterables can be used in a for loop and in many other places where a sequence is needed (zip(), map(),
…). When an iterable object is passed as an argument to the built-in function iter(), it returns an iterator
for the object. This iterator is good for one pass over the set of values. When using iterables, it is usually not
necessary to call iter() or deal with iterator objects yourself. The for statement does that automatically for
you, creating a temporary unnamed variable to hold the iterator for the duration of the loop. See also iterator,
sequence, and generator.
iterator
An object representing a stream of data. Repeated calls to the iterator’s __next__() method (or passing
it to the built-in function next()) return successive items in the stream. When no more data are available a
StopIteration exception is raised instead. At this point, the iterator object is exhausted and any further calls
to its __next__() method just raise StopIteration again. Iterators are required to have an __iter__()
method that returns the iterator object itself so every iterator is also iterable and may be used in most places
where other iterables are accepted. One notable exception is code which attempts multiple iteration passes. A
container object (such as a list) produces a fresh new iterator each time you pass it to the iter() function
or use it in a for loop. Attempting this with an iterator will just return the same exhausted iterator object used
in the previous iteration pass, making it appear like an empty container.
More information can be found in typeiter.
CPython implementation detail: CPython does not consistently apply the requirement that an iterator define
__iter__().
key function
A key function or collation function is a callable that returns a value used for sorting or ordering. For example,
locale.strxfrm() is used to produce a sort key that is aware of locale specific sort conventions.
A number of tools in Python accept key functions to control how elements are ordered or grouped. They
include min(), max(), sorted(), list.sort(), heapq.merge(), heapq.nsmallest(), heapq.
nlargest(), and itertools.groupby().
There are several ways to create a key function. For example, the str.lower() method can serve as a
key function for case insensitive sorts. Alternatively, a key function can be built from a lambda expression
such as lambda r: (r[0], r[2]). Also, operator.attrgetter(), operator.itemgetter(), and
operator.methodcaller() are three key function constructors. See the Sorting HOW TO for examples
of how to create and use key functions.
keyword argument
See argument.
lambda
An anonymous inline function consisting of a single expression which is evaluated when the function is called.
The syntax to create a lambda function is lambda [parameters]: expression
LBYL
Look before you leap. This coding style explicitly tests for pre-conditions before making calls or lookups. This
style contrasts with the EAFP approach and is characterized by the presence of many if statements.
In a multi-threaded environment, the LBYL approach can risk introducing a race condition between “the
looking” and “the leaping”. For example, the code, if key in mapping: return mapping[key] can
fail if another thread removes key from mapping after the test, but before the lookup. This issue can be solved
with locks or by using the EAFP approach.
list
A built-in Python sequence. Despite its name it is more akin to an array in other languages than to a linked list
since access to elements is O(1).
list comprehension
A compact way to process all or part of the elements in a sequence and return a list with the results. result
= ['{:#04x}'.format(x) for x in range(256) if x % 2 == 0] generates a list of strings con-
taining even hex numbers (0x..) in the range from 0 to 255. The if clause is optional. If omitted, all elements
in range(256) are processed.
loader
An object that loads a module. It must define a method named load_module(). A loader is typically returned
by a finder. See also:
• finders-and-loaders
• importlib.abc.Loader
• PEP 302
locale encoding
On Unix, it is the encoding of the LC_CTYPE locale. It can be set with locale.setlocale(locale.LC_CTYPE, new_locale).
named tuple
Some named tuples are built-in types, such as the values returned by time.localtime() and os.stat(). Alternatively, a named tuple can be created from a regular class definition that inherits from tuple and that defines named fields. Such a class can be written by hand, or it can be created by inheriting typing.NamedTuple, or with the factory function collections.namedtuple(). The latter techniques also add some extra methods that may not be found in hand-written or built-in named tuples.
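For example, using the factory function (Point, x and y are illustrative names):
>>> from collections import namedtuple
>>> Point = namedtuple('Point', ['x', 'y'])
>>> p = Point(1, 2)
>>> p.x, p[1]
(1, 2)
>>> p._asdict()
{'x': 1, 'y': 2}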
namespace
The place where a variable is stored. Namespaces are implemented as dictionaries. There are the local,
global and built-in namespaces as well as nested namespaces in objects (in methods). Namespaces support
modularity by preventing naming conflicts. For instance, the functions builtins.open and os.open() are
distinguished by their namespaces. Namespaces also aid readability and maintainability by making it clear
which module implements a function. For instance, writing random.seed() or itertools.islice()
makes it clear that those functions are implemented by the random and itertools modules, respectively.
namespace package
A PEP 420 package which serves only as a container for subpackages. Namespace packages may have no
physical representation, and specifically are not like a regular package because they have no __init__.py
file.
See also module.
nested scope
The ability to refer to a variable in an enclosing definition. For instance, a function defined inside another
function can refer to variables in the outer function. Note that nested scopes by default work only for reference
and not for assignment. Local variables both read and write in the innermost scope. Likewise, global variables
read and write to the global namespace. The nonlocal statement allows writing to outer scopes.
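A small sketch: the inner function can read count from the enclosing scope freely, but needs nonlocal to assign to it (counter is an illustrative name):

def counter():
    count = 0
    def increment():
        nonlocal count      # rebind count in the enclosing scope instead of creating a local
        count += 1
        return count
    return increment

next_value = counter()
print(next_value(), next_value())   # 1 2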
new-style class
Old name for the flavor of classes now used for all class objects. In earlier Python versions, only
new-style classes could use Python’s newer, versatile features like __slots__, descriptors, properties,
__getattribute__(), class methods, and static methods.
object
Any data with state (attributes or value) and defined behavior (methods). Also the ultimate base class of any
new-style class.
package
A Python module which can contain submodules or recursively, subpackages. Technically, a package is a
Python module with a __path__ attribute.
See also regular package and namespace package.
parameter
A named entity in a function (or method) definition that specifies an argument (or in some cases, arguments)
that the function can accept. There are five kinds of parameter:
• positional-or-keyword: specifies an argument that can be passed either positionally or as a keyword argument. This is the default kind of parameter, for example foo and bar in the sketch after this list.
• positional-only: specifies an argument that can be supplied only by position. Positional-only parameters can be defined by including a / character in the parameter list of the function definition after them, for example posonly1 and posonly2 in the sketch below.
• keyword-only: specifies an argument that can be supplied only by keyword. Keyword-only parameters can be defined by including a single var-positional parameter or bare * in the parameter list of the function definition before them, for example kw_only1 and kw_only2 in the sketch below.
• var-positional: specifies that an arbitrary sequence of positional arguments can be provided (in addition to any positional arguments already accepted by other parameters). Such a parameter can be defined by prepending the parameter name with *, for example args in the sketch below.
• var-keyword: specifies that arbitrarily many keyword arguments can be provided (in addition to any keyword arguments already accepted by other parameters). Such a parameter can be defined by prepending the parameter name with **, for example kwargs in the sketch below.
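A minimal sketch combining all five kinds in a single signature (func and the default values are illustrative; the parameter names are the ones mentioned above):

def func(posonly1, posonly2, /,          # positional-only
         foo, bar=None,                  # positional-or-keyword
         *args,                          # var-positional
         kw_only1, kw_only2=0,           # keyword-only
         **kwargs):                      # var-keyword
    pass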
Parameters can specify both optional and required arguments, as well as default values for some optional
arguments.
See also the argument glossary entry, the FAQ question on the difference between arguments and parameters,
the inspect.Parameter class, the function section, and PEP 362.
path entry
A single location on the import path which the path based finder consults to find modules for importing.
path entry finder
A finder returned by a callable on sys.path_hooks (i.e. a path entry hook) which knows how to locate
modules given a path entry.
See importlib.abc.PathEntryFinder for the methods that path entry finders implement.
path entry hook
A callable on the sys.path_hooks list which returns a path entry finder if it knows how to find modules on
a specific path entry.
path based finder
One of the default meta path finders which searches an import path for modules.
path-like object
An object representing a file system path. A path-like object is either a str or bytes object representing
a path, or an object implementing the os.PathLike protocol. An object that supports the os.PathLike
protocol can be converted to a str or bytes file system path by calling the os.fspath() function; os.fsdecode() and os.fsencode() can be used to guarantee a str or bytes result instead, respectively.
Introduced by PEP 519.
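For example, on a POSIX system (the path itself is illustrative):
>>> import os, pathlib
>>> p = pathlib.Path('spam') / 'eggs.txt'
>>> os.fspath(p)
'spam/eggs.txt'
>>> os.fsencode(p)
b'spam/eggs.txt'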
PEP
Python Enhancement Proposal. A PEP is a design document providing information to the Python community,
or describing a new feature for Python or its processes or environment. PEPs should provide a concise technical
specification and a rationale for proposed features.
PEPs are intended to be the primary mechanisms for proposing major new features, for collecting community
input on an issue, and for documenting the design decisions that have gone into Python. The PEP author is
responsible for building consensus within the community and documenting dissenting opinions.
See PEP 1.
portion
A set of files in a single directory (possibly stored in a zip file) that contribute to a namespace package, as
defined in PEP 420.
positional argument
See argument.
provisional API
A provisional API is one which has been deliberately excluded from the standard library’s backwards compatibility guarantees. While major changes to such interfaces are not expected, as long as they are marked
provisional, backwards incompatible changes (up to and including removal of the interface) may occur if
deemed necessary by core developers. Such changes will not be made gratuitously – they will occur only if
serious fundamental flaws are uncovered that were missed prior to the inclusion of the API.
Even for provisional APIs, backwards incompatible changes are seen as a “solution of last resort” - every
attempt will still be made to find a backwards compatible resolution to any identified problems.
This process allows the standard library to continue to evolve over time, without locking in problematic design
errors for extended periods of time. See PEP 411 for more details.
provisional package
See provisional API.
Python 3000
Nickname for the Python 3.x release line (coined long ago when the release of version 3 was something in the
distant future.) This is also abbreviated “Py3k”.
Pythonic
An idea or piece of code which closely follows the most common idioms of the Python language, rather than
implementing code using concepts common to other languages. For example, a common idiom in Python is
to loop over all elements of an iterable using a for statement. Many other languages don’t have this type of
construct, so people unfamiliar with Python sometimes use a numerical counter instead:
for i in range(len(food)):
    print(food[i])
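As opposed to the cleaner, Pythonic method:

for piece in food:
    print(piece)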
qualified name
A dotted name showing the “path” from a module’s global scope to a class, function or method defined in that
module, as defined in PEP 3155. For top-level functions and classes, the qualified name is the same as the
object’s name:
>>> class C:
... class D:
... def meth(self):
... pass
...
>>> C.__qualname__
'C'
>>> C.D.__qualname__
'C.D'
>>> C.D.meth.__qualname__
'C.D.meth'
When used to refer to modules, the fully qualified name means the entire dotted path to the module, including
any parent packages, e.g. email.mime.text:
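>>> import email.mime.text
>>> email.mime.text.__name__
'email.mime.text'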
reference count
The number of references to an object. When the reference count of an object drops to zero, it is deallocated.
Some objects are “immortal” and have reference counts that are never modified, and therefore the objects are
never deallocated. Reference counting is generally not visible to Python code, but it is a key element of the
CPython implementation. Programmers can call the sys.getrefcount() function to return the reference
count for a particular object.
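For example (the exact count is a CPython implementation detail and may differ):
>>> import sys
>>> x = []
>>> sys.getrefcount(x)
2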
regular package
A traditional package, such as a directory containing an __init__.py file.
See also namespace package.
__slots__
A declaration inside a class that saves memory by pre-declaring space for instance attributes and eliminating
instance dictionaries. Though popular, the technique is somewhat tricky to get right and is best reserved for
rare cases where there are large numbers of instances in a memory-critical application.
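A minimal sketch (Point is an illustrative class):

class Point:
    __slots__ = ('x', 'y')          # reserve space for exactly these attributes

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
# p.z = 3 would raise AttributeError: instances have no __dict__ for new attributes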
sequence
An iterable which supports efficient element access using integer indices via the __getitem__() special
method and defines a __len__() method that returns the length of the sequence. Some built-in sequence
types are list, str, tuple, and bytes. Note that dict also supports __getitem__() and __len__(),
but is considered a mapping rather than a sequence because the lookups use arbitrary hashable keys rather
than integers.
The collections.abc.Sequence abstract base class defines a much richer interface that goes beyond just
__getitem__() and __len__(), adding count(), index(), __contains__(), and __reversed__().
Types that implement this expanded interface can be registered explicitly using register(). For more
documentation on sequence methods generally, see Common Sequence Operations.
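A minimal sketch of explicit registration (Range3 is a purely illustrative class):

from collections.abc import Sequence

class Range3:
    def __getitem__(self, index):
        if 0 <= index < 3:
            return index * 10
        raise IndexError(index)

    def __len__(self):
        return 3

Sequence.register(Range3)               # explicit registration, as described above
print(isinstance(Range3(), Sequence))   # True
print(list(Range3()))                   # [0, 10, 20]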
set comprehension
A compact way to process all or part of the elements in an iterable and return a set with the results. results
= {c for c in 'abracadabra' if c not in 'abc'} generates the set of strings {'r', 'd'}. See
comprehensions.
single dispatch
A form of generic function dispatch where the implementation is chosen based on the type of a single argument.
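For example, using functools.singledispatch (describe is an illustrative function):

from functools import singledispatch

@singledispatch
def describe(arg):
    return 'something else'

@describe.register
def _(arg: int):
    return 'an integer'

@describe.register
def _(arg: str):
    return 'a string'

print(describe(3), describe('x'), describe(1.5))   # an integer a string something else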
slice
An object usually containing a portion of a sequence. A slice is created using the subscript notation, [] with
colons between numbers when several are given, such as in variable_name[1:3:5]. The bracket (subscript) notation uses slice objects internally.
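For example:
>>> letters = ['a', 'b', 'c', 'd', 'e']
>>> letters[1:4]
['b', 'c', 'd']
>>> s = slice(1, 4)          # the slice object the subscript notation creates internally
>>> letters[s]
['b', 'c', 'd']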
soft deprecated
A soft deprecated API should not be used in new code, but it is safe for already existing code to use it. The
API remains documented and tested, but will not be enhanced further.
Soft deprecation, unlike normal deprecation, does not plan on removing the API and will not emit warnings.
See PEP 387: Soft Deprecation.
special method
A method that is called implicitly by Python to execute a certain operation on a type, such as addition. Such
methods have names starting and ending with double underscores. Special methods are documented in specialnames.
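A minimal sketch (Money is an illustrative class; __add__ implements the + operator):

class Money:
    def __init__(self, amount):
        self.amount = amount

    def __add__(self, other):
        return Money(self.amount + other.amount)

total = Money(2) + Money(3)     # Python calls Money.__add__ implicitly
print(total.amount)             # 5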
statement
A statement is part of a suite (a “block” of code). A statement is either an expression or one of several constructs
with a keyword, such as if, while or for.
static type checker
An external tool that reads Python code and analyzes it, looking for issues such as incorrect types. See also
type hints and the typing module.
strong reference
In Python’s C API, a strong reference is a reference to an object which is owned by the code holding the
reference. The strong reference is taken by calling Py_INCREF() when the reference is created and released
with Py_DECREF() when the reference is deleted.
The Py_NewRef() function can be used to create a strong reference to an object. Usually, the Py_DECREF()
function must be called on the strong reference before exiting the scope of the strong reference, to avoid leaking
one reference.
See also borrowed reference.
text encoding
A string in Python is a sequence of Unicode code points (in range U+0000–U+10FFFF). To store or transfer
a string, it needs to be serialized as a sequence of bytes.
Serializing a string into a sequence of bytes is known as “encoding”, and recreating the string from the sequence
of bytes is known as “decoding”.
There are a variety of different text serialization codecs, which are collectively referred to as “text encodings”.
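For example, with the UTF-8 text encoding:
>>> 'café'.encode('utf-8')          # encoding: str -> bytes
b'caf\xc3\xa9'
>>> b'caf\xc3\xa9'.decode('utf-8')  # decoding: bytes -> str
'café'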
text file
A file object able to read and write str objects. Often, a text file actually accesses a byte-oriented datastream
and handles the text encoding automatically. Examples of text files are files opened in text mode ('r' or 'w'),
sys.stdin, sys.stdout, and instances of io.StringIO.
See also binary file for a file object able to read and write bytes-like objects.
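A small sketch using io.StringIO, an in-memory text file (notes.txt is an illustrative file name):

import io

buffer = io.StringIO()              # accepts and returns str, never bytes
buffer.write('spam\n')
print(buffer.getvalue())            # the str written above

# An on-disk text file; the text encoding is handled automatically:
with open('notes.txt', 'w', encoding='utf-8') as f:
    f.write('spam\n')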
triple-quoted string
A string which is bound by three instances of either a quotation mark (") or an apostrophe ('). While they don’t
provide any functionality not available with single-quoted strings, they are useful for a number of reasons.
They allow you to include unescaped single and double quotes within a string and they can span multiple lines
without the use of the continuation character, making them especially useful when writing docstrings.
type
The type of a Python object determines what kind of object it is; every object has a type. An object’s type is
accessible as its __class__ attribute or can be retrieved with type(obj).
type alias
A synonym for a type, created by assigning the type to an identifier.
Type aliases are useful for simplifying type hints. For example:
def remove_gray_shades(
        colors: list[tuple[int, int, int]]) -> list[tuple[int, int, int]]:
    pass
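could be made more readable like this (Color is an illustrative alias name):

Color = tuple[int, int, int]

def remove_gray_shades(colors: list[Color]) -> list[Color]:
    pass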
variable annotation
An annotation of a variable or a class attribute. When annotating a variable or a class attribute, assignment is optional:

class C:
    field: 'annotation'
Variable annotations are usually used for type hints: for example this variable is expected to take int values:
count: int = 0
virtual environment
A cooperatively isolated runtime environment that allows Python users and applications to install and upgrade
Python distribution packages without interfering with the behaviour of other Python applications running on
the same system.
See also venv.
virtual machine
A computer defined entirely in software. Python’s virtual machine executes the bytecode emitted by the bytecode compiler.
Zen of Python
Listing of Python design principles and philosophies that are helpful in understanding and using the language.
The listing can be found by typing “import this” at the interactive prompt.
APPENDIX
B
ABOUT THESE DOCUMENTS
These documents are generated from reStructuredText sources by Sphinx, a document processor specifically written
for the Python documentation.
Development of the documentation and its toolchain is an entirely volunteer effort, just like Python itself. If you
want to contribute, please take a look at the reporting-bugs page for information on how to do so. New volunteers
are always welcome!
Many thanks go to:
• Fred L. Drake, Jr., the creator of the original Python documentation toolset and writer of much of the content;
• the Docutils project for creating reStructuredText and the Docutils suite;
• Fredrik Lundh for his Alternative Python Reference project from which Sphinx got many good ideas.
® Note
GPL-compatible doesn’t mean that we’re distributing Python under the GPL. All Python licenses, unlike the GPL,
let you distribute a modified version without making your changes open source. The GPL-compatible licenses
make it possible to combine Python with other software that is released under the GPL; the others don’t.
Thanks to the many outside volunteers who have worked under Guido’s direction to make these releases possible.
2. Subject to the terms and conditions of this License Agreement, PSF hereby
grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
analyze, test, perform and/or display publicly, prepare derivative works,
distribute, and otherwise use Python 3.12.7 alone or in any derivative
version, provided, however, that PSF's License Agreement and PSF's notice of
copyright, i.e., "Copyright © 2001-2023 Python Software Foundation; All Rights
Reserved" are retained in Python 3.12.7 alone or in any derivative version
prepared by Licensee.
5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON 3.12.7
FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF
MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 3.12.7, OR ANY DERIVATIVE
THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
2. Subject to the terms and conditions of this BeOpen Python License Agreement,
BeOpen hereby grants Licensee a non-exclusive, royalty-free, world-wide license
to reproduce, analyze, test, perform and/or display publicly, prepare derivative
works, distribute, and otherwise use the Software alone or in any derivative
version, provided, however, that the BeOpen Python License is retained in the
Software, alone or in any derivative version prepared by Licensee.
4. BEOPEN SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE SOFTWARE FOR
ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF USING,
MODIFYING OR DISTRIBUTING THE SOFTWARE, OR ANY DERIVATIVE THEREOF, EVEN IF
ADVISED OF THE POSSIBILITY THEREOF.
2. Subject to the terms and conditions of this License Agreement, CNRI hereby
grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
analyze, test, perform and/or display publicly, prepare derivative works,
distribute, and otherwise use Python 1.6.1 alone or in any derivative version,
provided, however, that CNRI's License Agreement and CNRI's notice of copyright,
i.e., "Copyright © 1995-2001 Corporation for National Research Initiatives; All
4. CNRI is making Python 1.6.1 available to Licensee on an "AS IS" basis. CNRI
MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE,
BUT NOT LIMITATION, CNRI MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY
OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF
PYTHON 1.6.1 WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.
5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON 1.6.1 FOR
ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF
MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 1.6.1, OR ANY DERIVATIVE
THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
Permission to use, copy, modify, and distribute this software and its
documentation for any purpose and without fee is hereby granted, provided that
the above copyright notice appear in all copies and that both that copyright
C.2.5 ZERO-CLAUSE BSD LICENSE FOR CODE IN THE PYTHON 3.12.7 DOCUMENTATION
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH
REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,
INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR
OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.
C.3.2 Sockets
The socket module uses the functions getaddrinfo() and getnameinfo(), which are coded in separate source
files from the WIDE Project, https://www.wide.ad.jp/.
THIS SOFTWARE IS PROVIDED BY THE PROJECT AND CONTRIBUTORS ``AS IS'' AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE PROJECT OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.
Permission to use, copy, modify, and distribute this Python software and
its associated documentation for any purpose without fee is hereby
granted, provided that the above copyright notice appears in all copies,
and that both that copyright notice and this permission notice appear in
supporting documentation, and that the name of neither Automatrix,
Bioreason or Mojam Media be used in advertising or publicity pertaining to
distribution of the software without specific, written prior permission.
SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
OF THIS SOFTWARE.
C.3.8 test_epoll
The test.test_epoll module contains the following notice:
Copyright (c) 2000 Doug White, 2006 James Knight, 2007 Christian Heimes
All rights reserved.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.
C.3.10 SipHash24
The file Python/pyhash.c contains Marek Majkowski’s implementation of Dan Bernstein’s SipHash24 algorithm.
It contains the following note:
<MIT License>
Copyright (c) 2013 Marek Majkowski <marek@popcount.org>
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
</MIT License>
Original location:
https://github.com/majek/csiphash/
/****************************************************************
*
* The author of this software is David M. Gay.
*
* Copyright (c) 1991, 2000, 2001 by Lucent Technologies.
*
* Permission to use, copy, modify, and distribute this software for any
* purpose without fee is hereby granted, provided that this entire notice
* is included in all copies of any software which is or includes a copy
* or modification of this software and in all copies of the supporting
* documentation for such software.
*
* THIS SOFTWARE IS BEING PROVIDED "AS IS", WITHOUT ANY EXPRESS OR IMPLIED
* WARRANTY. IN PARTICULAR, NEITHER THE AUTHOR NOR LUCENT MAKES ANY
* REPRESENTATION OR WARRANTY OF ANY KIND CONCERNING THE MERCHANTABILITY
* OF THIS SOFTWARE OR ITS FITNESS FOR ANY PARTICULAR PURPOSE.
*
***************************************************************/
C.3.12 OpenSSL
The modules hashlib, posix, ssl, crypt use the OpenSSL library for added performance if made available by the
operating system. Additionally, the Windows and macOS installers for Python may include a copy of the OpenSSL
libraries, so we include a copy of the OpenSSL license here. For the OpenSSL 3.0 release, and later releases derived
from that, the Apache License v2 applies:
Apache License
Version 2.0, January 2004
https://www.apache.org/licenses/
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
C.3.13 expat
The pyexpat extension is built using an included copy of the expat sources unless the build is configured
--with-system-expat:
Copyright (c) 1998, 1999, 2000 Thai Open Source Software Center Ltd
and Clark Cooper
C.3.14 libffi
The _ctypes C extension underlying the ctypes module is built using an included copy of the libffi sources unless
the build is configured --with-system-libffi:
Copyright (c) 1996-2008 Red Hat, Inc and others.
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
C.3.15 zlib
The zlib extension is built using an included copy of the zlib sources if the zlib version found on the system is too
old to be used for the build:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software
in a product, an acknowledgment in the product documentation would be
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
C.3.16 cfuhash
The implementation of the hash table used by the tracemalloc is based on the cfuhash project:
Copyright (c) 2005 Don Owens
All rights reserved.
C.3.17 libmpdec
The _decimal C extension underlying the decimal module is built using an included copy of the libmpdec library
unless the build is configured --with-system-libmpdec:
Copyright (c) 2008-2020 Stefan Krah. All rights reserved.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.
C.3.19 Audioop
The audioop module uses the code base in the g771.c file of the SoX project: https://sourceforge.net/projects/sox/files/sox/12.17.7/sox-12.17.7.tar.gz
This source code is a product of Sun Microsystems, Inc. and is provided for unrestricted use. Users may
copy or modify this source code without charge.
SUN SOURCE CODE IS PROVIDED AS IS WITH NO WARRANTIES OF ANY KIND INCLUD-
ING THE WARRANTIES OF DESIGN, MERCHANTIBILITY AND FITNESS FOR A PARTICU-
LAR PURPOSE, OR ARISING FROM A COURSE OF DEALING, USAGE OR TRADE PRAC-
TICE.
Sun source code is provided with no support and without any obligation on the part of Sun Microsystems,
Inc. to assist in its use, correction, modification or enhancement.
SUN MICROSYSTEMS, INC. SHALL HAVE NO LIABILITY WITH RESPECT TO THE IN-
FRINGEMENT OF COPYRIGHTS, TRADE SECRETS OR ANY PATENTS BY THIS SOFTWARE
OR ANY PART THEREOF.
In no event will Sun Microsystems, Inc. be liable for any lost revenue or profits or other special, indirect
and consequential damages, even if Sun has been advised of the possibility of such damages.
Sun Microsystems, Inc. 2550 Garcia Avenue Mountain View, California 94043
C.3.20 asyncio
Parts of the asyncio module are incorporated from uvloop 0.16, which is distributed under the MIT license:
COPYRIGHT
See History and License for complete license and permissions information.