Developer Manual
Contents
Milestones
Version Numbers
Current State
Setting up the Development Environment for Nuitka
Visual Studio Code
Eclipse / PyCharm
Commit and Code Hygiene
Coding Rules Python
Tool to format
Identifiers
Classes
Functions
Module/Package Names
Context Managers
Prefer list contractions over built-ins
Coding Rules C
The "git flow" model
Nuitka "git/github" Workflow
API Documentation and Guidelines
Use of Standard Python __doc__ Strings
Special doxygen Anatomy of __doc__
Checking the Source
Running the Tests
Running all Tests
Basic Tests
Syntax Tests
Program Tests
Generated Tests
Compile Nuitka with Nuitka
Internal/Plugin API
Working with the CPython suites
Resetting CPython suites
Added new CPython suites
Design Descriptions
Nuitka Logo
Choice of the Target Language
Use of Scons internally
Locating Modules and Packages
Hooking for module import process
Supporting __class__ of Python3
Frame Stack
Parameter Parsing
Input
Keyword dictionary
Argument tuple
SSA form for Nuitka
Loop SSA
Python Slots in Optimization
Basic Slot Idea
Representation in Nuitka
The C side
Built-in call optimization
Code Generation towards C
Exceptions
Statement Temporary Variables
Local Variables Storage
Exit Targets
Frames
Abortive Statements
Constant Preparation
Language Conversions to make things simpler
The assert statement
The "comparison chain" expressions
The execfile built-in
Generator expressions with yield
Function Decorators
Functions nested arguments
In-place Assignments
Complex Assignments
Unpacking Assignments
With Statements
For Loops
While Loops
Exception Handlers
Statement try/except with else
Class Creation (Python2)
Class Creation (Python3)
Generator Expressions
List Contractions
Set Contractions
Dictionary Contractions
Boolean expressions and and or
Simple Calls
Complex Calls
Assignment Expressions
Match Statements
Print Statements
Reformulations during Optimization
Builtin zip for Python2
Mixed Types
Back to "ctypes"
Now to the interface
Discussing with examples
Code Generation Impact
Initial Implementation
Goal 1 (Reached)
Goal 2 (Reached)
Goal 3
Goal 4
Limitations for now
How to make Features Experimental
Command Line
In C code
In Python
When to use it
When to remove it
Adding dependencies to Nuitka
Adding a Run Time Dependency
Adding a Development Dependency
Idea Bin
Prongs of Action
Builtin optimization
Class Creation Overhead Reduction
Memory Usage at Compile Time
Coverage Testing
Python3 Performance
Caching of Python level compilation
Updates for this Manual
The purpose of this Developer Manual is to present the current design of Nuitka, the project rules, and the
motivations for choices made. It is intended to be a guide to the source code, and to give explanations that
don't fit into the source code in the form of comments.
It should be used as a reference for the process of planning and documenting decisions we made.
Therefore we are e.g. presenting here the type inference plans before implementing them, and we update
them as we proceed.
It grows out of discussions and presentations made at conferences, as well as private conversations and
discussions on the issue tracker.
Milestones
1. Feature parity with CPython: understand all the language constructs and behave absolutely
compatibly.
Feature parity has been reached for CPython 2.6 and 2.7. We do not target any older CPython
release. For CPython 3.3 up to 3.8 it has also been reached. We do not target the older and
practically unused CPython 3.0 to 3.2 releases.
This milestone was reached. Dropping support for Python 2.6 and 3.3 is an option, should this prove
to be of any benefit. Currently it is not, as it only extends the test coverage.
2. Create the most efficient native code from this. This means to be fast with the basic Python object
handling.
This milestone was reached, although of course, micro optimizations to this are happening all the
time.
3. Then do constant propagation, determine as many values and useful constraints as possible at
compile time and create more efficient code.
This milestone is considered almost reached. We continue to discover new things, but the
infrastructure is there, and these are easy to add.
4. Type inference, detect and special case the handling of strings, integers, lists in the program.
• Types are always PyObject *, and only a few C types exist so far, e.g. nuitka_bool and
nuitka_void, with more coming. Even for objects, it is often known that things are e.g. really a
PyTupleObject *, but no C type is available for that yet.
• There are some specific uses of types beyond "compile time constant", which are encoded in type and
value shapes. These can be used to predict some operations and conditions, e.g. whether they raise,
and the result types they give.
• In code generation, the supported C types are used, and sometimes we have specialized code
generation, e.g. a binary operation that takes an int and a float and produces a float value.
There will be fallbacks to less specific types.
The expansion with more C types is currently in progress, and there will also be alternative C types, where
e.g. PyObject * and C long are in an enum that indicates which value is valid, and where special code
will be available that can avoid creating the PyObject * unless the latter overflows.
Eclipse / PyCharm
Don't use these anymore, we consider Visual Studio Code to be far superior for delivering a nice out of the
box environment.
In order to set up hooks, you need to execute these commands:
# Where python is the one you use with Nuitka, this then gets all
# development requirements, can be full PATH.
python -m pip install -r requirements-devel.txt
python ./misc/install-git-hooks.py
These commands will make sure that the autoformat-nuitka-source is run on every staged file
content at the time you do the commit. For C files, it may complain about unavailability of clang-format;
follow its advice. You may call the above tool at any time, without arguments, to format all Nuitka source code.
Should you encounter problems with applying the changes to the checked out file, you can always execute
it with the COMMIT_UNCHECKED=1 environment variable set.
Tool to format
There is a tool bin/autoformat-nuitka-source which is to apply automatic formatting to code as
much as possible. It uses black (internally) for consistent code formatting. The imports are sorted with
isort for proper order.
The tool (mostly black and isort) encodes all formatting rules, and makes the decisions for us. The
idea being that we can focus on actual code and do not have to care as much about other things. It also
deals with Windows new lines, trailing space, etc. and even sorts PyLint disable statements.
Identifiers
Classes
Classes are camel case with a leading upper case letter. Functions and methods start with a verb in lower
case, but are also camel case. Variables and arguments are lower case with _ as a separator.
class SomeClass:
    def doSomething(self, some_parameter):
        ...
Here, setLoopContinueTarget will be so well known that the reader is expected to know the argument
names and their meaning, but it would still be better to add them. In this instance though, the variable name
already indicates what it is.
Module/Package Names
Normal modules are named in camel case with leading upper case, because of their role as singleton
classes. The difference between a module and a class is small enough and in the source code they are
also used similarly.
For the packages, no real code is allowed in their __init__.py and they must be lower case, like e.g.
nuitka or codegen. This is to distinguish them from the modules.
Packages shall only be used to group things. In nuitka.code_generation the code generation
modules are located, while the main interface is nuitka.code_generation.CodeGeneration, which
may then use most of the entries as local imports.
There is no code in packages themselves. For programs, we use __main__ package to carry the actual
code.
Names of modules should be plural if they contain classes. An example is a Nodes module that
contains a Node class.
Context Managers
Names for context managers start with with.
In order to easily recognize that something is to be used as a context manager, we follow a pattern of
naming them withSomething, so that this is easily recognized.
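A minimal sketch of the pattern, using contextlib; the helper name withExtendedList is invented here for illustration and is not an actual Nuitka API:

```python
from contextlib import contextmanager

# Hypothetical helper following the "withSomething" naming convention.
@contextmanager
def withExtendedList(some_list, value):
    # Temporarily append a value, undoing the change on exit.
    some_list.append(value)
    try:
        yield some_list
    finally:
        some_list.pop()
```

The with statement at the use site then reads naturally, e.g. with withExtendedList(values, 3): ...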
class A:
    def getX(self):
        return 1

    x = property(getX)

class B(A):
    def getX(self):
        return 2

A().x == 1  # True
B().x == 1  # True (!)
This pretty much is what makes properties bad. One would hope B().x to be 2, but instead it's not
changed. Because of the way properties take the functions and not members, and because they then are
not part of the class, they cannot be overloaded without re-declaring them.
Overloading is then not at all obvious anymore. Now imagine having a setter and a
getter. How to update the property easily?
So, that's what is not likable about them. And we are also for clarity in these internal APIs. Properties try
to hide the fact that code needs to run and may do things. So let's not use them.
For an external API you may exactly want to hide things, but internally that has no use, and in Nuitka,
every API is internal API. One exception may be the hints module, which will gladly use such tricks for an
easier write syntax.
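The plain-method alternative behaves as one would expect, which is the design choice argued for above:

```python
class A:
    def getX(self):
        return 1

class B(A):
    def getX(self):
        return 2

# Plain getter methods resolve through the class, so overloading just works:
assert A().getX() == 1
assert B().getX() == 2
```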
Coding Rules C
For the static C parts, e.g. compiled types and helper codes, clang-format from the LLVM project is used;
the tool autoformat-nuitka-source applies this for us.
We always have blocks for conditional statements to avoid typical mistakes made by adding a statement to
a branch, forgetting to make it a block.
• Create a Branch
If you are having merge conflicts while doing the previous step, then check out (DON'T FORGET TO
SAVE YOUR CHANGES FIRST IF ANY):
<https://stackoverflow.com/questions/1125968/how-do-i-force-git-pull-to-overwrite-local-files>
• In case you have an existing branch rebase it to develop
Fix the merge conflicts if any and continue, or skip the commit if it is not yours. Sometimes, for important
bug fixes, the develop history gets rewritten. In that case, old and new commits will conflict during your
rebase, and skipping is the best way to go.
• Making changes
Notes:
It does one or the other indispensable things based on some parameters
and proudly returns a dictionary.
Args:
p1: parameter one
p2: parameter two
Kwargs:
kw1: keyword one
kw2: keyword two
Returns:
A dictionary calculated from the input.
Raises:
ValueError, IndexError
Examples:
>>> foo(1, 2, kw1=3, kw2=4)
{'a': 4, 'b': 6}
"""
./bin/check-nuitka-with-pylint
The above command is expected to give no warnings. It is also run on our CI and we will not merge
branches that do not pass.
./tests/run-tests
The CPython test suites will only run if you have the submodules of the Nuitka git repository checked
out. Otherwise, these will be skipped with a warning that they are not available.
The policy is generally that ./tests/run-tests running and passing all the tests on Linux and Windows
shall be considered sufficient for a release, but of course, depending on changes going on, that might have
to be expanded.
Basic Tests
You can run the "basic" tests like this:
./tests/basics/run_all.py search
These tests normally give sufficient coverage to assume that a change is correct, if these "basic" tests
pass. The most important constructs and built-ins are exercised.
To control the Python version used for testing, you can set the PYTHON environment variable to e.g.
python3.5 (can also be a full path), or simply execute the run_all.py script directly with the intended
version, as it is portable across all supported Python versions, and defaults to testing with the Python
version it is run with.
Syntax Tests
Then there are "syntax" tests, i.e. language constructs that need to give a syntax error.
It sometimes so happens that Nuitka must do this itself, because the ast.parse doesn't see the problem
and raises no SyntaxError of its own. These cases are then covered by tests to make sure they work as
expected.
Using the global statement on a function argument is an example of this. These tests make sure that the
errors of Nuitka and CPython are totally the same for this:
./tests/syntax/run_all.py search
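The global-on-a-parameter case mentioned above can be demonstrated directly: ast.parse accepts the source, and only the later symbol table stage (reached via compile) rejects it, which is why Nuitka must raise such errors itself:

```python
import ast

# "global" on a parameter name is not a parse error, only a binding error.
source = "def f(x):\n    global x\n"

ast.parse(source)  # parses fine, no SyntaxError at this stage

try:
    compile(source, "<test>", "exec")  # the symbol table stage rejects it
    raised = False
except SyntaxError:
    raised = True

assert raised
```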
Program Tests
Then there are small "programs" tests, that e.g. exercise many kinds of import tricks and are designed to
reveal problems with inter-module behavior. These can be run like this:
./tests/programs/run_all.py search
Generated Tests
There are tests which are generated from Jinja2 templates. They aim at e.g. combining types with
operations, in-place or not, or large constants. These can be run like this:
./tests/generated/run_all.py search
./tests/reflected/compile_itself.py
Internal/Plugin API
The documentation from the source code for both the Python and the C parts is published as the Nuitka API,
and is arguably in a relatively bad shape, as we started generating it with Doxygen only relatively late.
Improvements have already been implemented for plugins: The plugin base class defined in
PluginBase.py (which is used as a template for all plugins) is fully documented in Doxygen now. The
same is true for the recently added standard plugins NumpyPlugin.py and TkinterPlugin.py. These
will be uploaded very soon.
Going forward, this will also happen for the remaining standard plugins.
Please find here a detailed description of how to write your own plugin.
To learn about plugin option specification consult this document.
git submodule foreach 'git fetch && git checkout $(basename $(pwd)) && \
git reset --hard origin/$(basename $(pwd))'
# Now cherry-pick all commits of test support, these disable network, audio, GUI, random filenames and more
# and are crucial for deterministic outputs and non-reliance on outside stuff.
git log --reverse origin/CPython310 --oneline -- test/support/__init__.py | tail -n +2 | cut -d' ' -f1 | xargs git cherry-pick
git push
Design Descriptions
There should be a lot more of these, containing graphics from presentations given. This will be filled in, but not now.
Nuitka Logo
The logo was submitted by "dr. Equivalent". Its source is contained in doc/Logo where 3 variants of the
logo in SVG are placed.
.. image:: doc/images/Nuitka-Logo-Symbol.png
:alt: Nuitka Logo
.. image:: doc/images/Nuitka-Logo-Horizontal.png
:alt: Nuitka Logo
.. image:: doc/images/Nuitka-Logo-Vertical.png
:alt: Nuitka Logo
From these logos, PNG images and "favicons" are derived.
The exact ImageMagick commands are in nuitka/tools/release/Documentation, but are not
executed each time; the commands are also replicated here:
Note
When we speak of "standalone" mode, this is handled outside of Scons, and after it, creating the
".dist" folder. This is done in the nuitka.MainControl module.
For interfacing to Scons, there is the module nuitka.build.SconsInterface that will support calling
scons - potentially from one of two inline copies (one for before / one for Python 3.5 or later). These are
mainly used on Windows or when using source releases - and passing arguments to it. These arguments
are passed as key=value, and decoded in the scons file of Nuitka.
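A minimal sketch of decoding such key=value arguments; the function name is invented, and the real scons file of Nuitka may decode them differently:

```python
# Hypothetical decoder for "key=value" style command line arguments.
def decodeSconsArgs(args):
    result = {}
    for arg in args:
        # Split on the first "=" only, so values may contain "=" themselves.
        key, _, value = arg.partition("=")
        result[key] = value
    return result

assert decodeSconsArgs(["source_dir=/tmp/build", "module_mode=true"]) == {
    "source_dir": "/tmp/build",
    "module_mode": "true",
}
```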
The scons file is named SingleExe.scons for lack of a better name. The name is really wrong now, but we
have yet to find a better one. It once expressed the intention to be used to create executables, but the same
works for modules too, as in terms of building, and to Scons, things really are the same.
The scons file supports operation in multiple modes for many things, and modules is just one of them. It
runs outside of the Nuitka process scope, potentially even with a different Python version, so all the
information must be passed on the command line.
What follows is the (lengthy) list of arguments that the scons file processes:
• source_dir
Where the generated C source code is. Scons will just compile everything it finds there. No list of files
is passed; instead this directory is scanned.
• nuitka_src
Where the include files and static C parts of Nuitka live. These provide e.g. the implementation of
compiled functions, generators, and other helper codes; this will point to where the nuitka.build
package normally lives.
• module_mode
Build a module instead of a program.
• abiflags
The flags needed for the Python ABI chosen. Might be necessary to find the folders for Python
installations on some systems.
• icon_path
The icon to use for Windows programs if given.
Note
Of course, it would make sense to detect at compile time which module it is that is being imported and
then to make it directly. At this time, we don't have this inter-module optimization yet; mid-term it
should become easy to add.
• The use of the super variable name triggers the addition of a closure variable __class__, as can
be witnessed by the following code:
class X:
    def f1(self):
        print(locals())

    def f2(self):
        print(locals())
        super  # Just using the name, not even calling it.

x = X()
x.f1()
x.f2()
Output is:
• Under Python3, usage of __class__ as a reference in a child function body is mandatory. It remains
that way until all variable names have been resolved.
• When recognizing calls to super without arguments, make the arguments
into variable references to __class__ and potentially self (actually the first argument name).
• After all variables are known, and no suspicious unresolved calls to anything named super
remain, unused references are optimized away by the normal unused closure variable handling.
• Class dictionary definitions are added.
These are special direct function calls, ready to propagate also "bases" and "metaclass" values,
which need to be calculated outside.
The function bodies used for classes will automatically store __class__ as a shared local variable, if
anything uses it. And if it's not assigned by user code, it doesn't show up in the "locals()" used for
dictionary creation.
Existing __class__ local variable values are in fact provided as closure, and overridden with the
built class, but they should be used for the closure giving before the class is finished.
So __class__ will be a local variable of the class body until the class is built; then it will be the
__class__ itself.
Frame Stack
In Python, every function, class, and module has a frame. It is created when the scope is entered, and
there is a stack of these at run time, which becomes visible in tracebacks in case of exceptions.
The choice of Nuitka is to make this an explicit element of the node tree, which as such is subject to
optimization. In cases where frames are not needed, they may be removed.
Consider the following code.
def f():
    if someNotRaisingCall():
        return somePotentiallyRaisingCall()
    else:
        return None
In this example, the frame is not needed for all of the code, because the condition checked cannot possibly
raise at all. The idea is to make the frame guard explicit and then to reduce its scope whenever possible.
So we start out with code like this one:
def f():
    with frame_guard("f"):
        if someNotRaisingCall():
            return somePotentiallyRaisingCall()
        else:
            return None
This can then be optimized into:
def f():
    if someNotRaisingCall():
        with frame_guard("f"):
            return somePotentiallyRaisingCall()
    else:
        return None
Notice how the frame guard taking is limited and may be avoided, or in the best case, removed
completely. This will also play a role when in-lining functions. The frame stack entry will then be
automatically preserved without extra care.
Note
In the actual code, nuitka.nodes.FrameNodes.StatementsFrame represents this as a set
of statements to be guarded by a frame presence.
Input
The input is an argument tuple (the type is fixed), which contains the positional arguments, and
potentially an argument dict (the type is fixed as well, but it could also be NULL, indicating that there are no
keyword arguments).
Keyword dictionary
The keyword argument dictionary is checked first. Anything in there that cannot be associated either raises
an error or is added to a potentially given star dict argument. So there are two major cases.
• No star dict argument: Iterate over dictionary, and assign or raise errors.
This check covers extra arguments given.
• With star dict argument: Iterate over the dictionary, and assign or raise errors.
An interesting case for optimization is when there are no positional arguments; then no check is needed,
and the keyword argument dictionary could be used as the star argument. Should it change, a copy is
needed though.
What's noteworthy here is that, for the keywords, we can hope that they are the same value
as we use. The interning of strings increases chances for non-compiled code to do that, especially for short
names.
We then can do a simple is comparison, and only fall back to real string == comparisons after all of these
failed. That means more code, but also a lot faster code in the positive case.
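The fast path can be sketched as follows; the function name is invented, and the real implementation is in C:

```python
# Hypothetical sketch of the identity-first keyword matching described above.
def matchesKeyword(given, expected):
    # Fast path: interned strings are often the very same object.
    if given is expected:
        return True
    # Slow path: only now pay for a real character-wise comparison.
    return given == expected

assert matchesKeyword("alpha", "alpha") is True
assert matchesKeyword("alpha", "beta") is False
```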
Argument tuple
After this is completed, the argument tuple is up for processing. The first thing it needs to do is to check if
there are too many of them, and then to complain.
For arguments in Python2, there is the possibility of them being nested, in which case they cannot be
provided in the keyword dictionary, and merely should get picked from the argument tuple.
Otherwise, the length of the argument tuple should be checked against its position and if possible, values
should be taken from there. If it's already set (from the keyword dictionary), raise an error instead.
• variable_actives
Dictionary, where per "variable" the currently used version is tracked. Used to track changes of the
situation in branches. This is the main input for the merge process.
• variable_traces
Dictionary, where "variable" and "version" form the key. The values are objects with or without an
assignment, and a list of usages, which starts out empty.
These objects have usages appended to them. In "onVariableSet", a new version is allocated, which
gives a new object for the dictionary, with an empty usages list, because each write starts a new
version. In "onVariableUsage" the version is detected from the current version. It may not be set yet,
which means it's a read of an undefined value (for a local variable, not a parameter name), or unknown
in the case of a global variable.
These objects may be told that their value has escaped. This should influence the value friend
attached to the initial assignment. Each usage may have a current value friend state that is different.
When merging branches of conditional statements, the merge shall apply as follows:
Note
For conditional expressions, there are always only two branches. Even if you think you have more
than one branch, you do not. It's always nested branches, already when it comes out of the ast
parser.
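This can be verified directly with the ast module: a seemingly "three way" conditional expression already arrives as nested two-branch IfExp nodes:

```python
import ast

# The orelse branch of the outer IfExp is itself a nested IfExp.
tree = ast.parse("a if x else b if y else c", mode="eval")

assert isinstance(tree.body, ast.IfExp)
assert isinstance(tree.body.orelse, ast.IfExp)
```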
Loop SSA
For loops we have the additional difficulty that we would need to look ahead at what types a variable has
at loop exit, but that is a cyclic dependency.
Our solution is to consider the variable types at loop entry. When these change, we drop all gained
information from inside the loop. We may e.g. think that a variable is an int or float, but later recognize
that it can only be a float. Derivations from int must be discarded, and the loop analysis restarted.
Then during the loop, we assign an incomplete loop trace shape to the variable, which e.g. says it was an
int initially and additional type shapes, e.g. int or long are then derived. If at the end of the loop, a
type produced no new types, we know we are finished and mark the trace as a complete loop trace.
If it is not, and next time, we have the same initial types, we add the ones derived from this to the starting
values, and see if this gives more types.
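The fixed-point iteration described above can be sketched like this; the function names and the derivation rule are invented for illustration, the real analysis works on Nuitka's type shapes:

```python
# Hypothetical sketch of the loop trace fixed-point iteration.
def computeLoopTraceTypes(entry_types, deriveTypes):
    current = set(entry_types)
    while True:
        derived = deriveTypes(current)
        if derived <= current:
            # Nothing new was produced: the loop trace is complete.
            return current
        # Add newly derived types to the starting values and try again.
        current |= derived

# Example rule: adding two ints may overflow to "long" on Python2.
def deriveTypes(types):
    return {"long"} if "int" in types else set()

assert computeLoopTraceTypes({"int"}, deriveTypes) == {"int", "long"}
```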
1.0 + something
This something will not just blindly work when it's a float, but goes through a slot mechanism, which can
then be overloaded.
class SomeStrangeFloat:
    def __float__(self):
        return 3.14

something = SomeStrangeFloat()
# ...
1.0 + float(something)  # 4.140000000000001
As a deliberate choice, there is no __list__ slot used. The Python designers are aiming at solving many
things with slots, but they also accept limitations.
There are many slots that are frequently used, most often behind your back (__iter__, __next__,
__lt__, etc.). The list is large, and tends to grow with Python releases, but it is not endless.
Representation in Nuitka
So a slot in Nuitka typically has an owning node. We use __len__ as an example here. In the
computeExpression the len node named ExpressionBuiltinLen has to defer the decision what it
computes to its argument.
That decision then, in the absence of any type knowledge, must be made absolutely carefully and
conservatively, as anything could be executing here.
An example is this code in ExpressionBase which every expression uses by default:
has_len = shape.hasShapeSlotLen()

if has_len is False:
    return makeRaiseTypeErrorExceptionReplacementFromTemplateAndValue(
        template="object of type '%s' has no len()",
        operation="len",
        original_node=len_node,
        value_node=self,
    )
elif has_len is True:
    iter_length = self.getIterationLength()

    result = makeConstantRefNode(
        constant=int(iter_length),  # make sure to downcast long
        source_ref=len_node.source_ref,
    )

# Any exception may be raised.
trace_collection.onExceptionRaiseExit(BaseException)
Notice how by default, when it is unpredictable or even unknown whether a __len__ slot is there, the
code indicates that its contents and the control flow escape (things could change behind our back) and
any exception could happen.
Other expressions can know better, e.g. for compile time constants we can be a whole lot more certain:
Here we use a function that will produce a concrete value or the exception that the
computation function raised. We can let the Python interpreter that runs Nuitka do all the
hard work. This lives in CompileTimeConstantExpressionBase and is the base for all kinds of
constant values, and even built-in references like the name len itself, which would be used in case of
doing len(len), which obviously gives an exception.
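The principle of letting the hosting Python do the work can be sketched as follows; the helper name is invented, Nuitka's actual helper differs:

```python
# Hypothetical sketch: compute a value at compile time, or capture the
# exception that the computation raises, for later replacement in the tree.
def getComputationResult(computation):
    try:
        return computation(), None
    except BaseException as e:
        return None, e

value, exc = getComputationResult(lambda: len("hello"))
assert value == 5 and exc is None

value, exc = getComputationResult(lambda: len(len))  # len of a builtin
assert isinstance(exc, TypeError)
```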
Other overloads do not currently exist in Nuitka, but through the iteration length, most cases can be
addressed; e.g. list nodes typically know their element counts.
The C side
When a slot is not optimized away at compile time however, we need to generate actual code for it. We
figure out what this could be by looking at the original CPython implementation.
static PyObject *
builtin_len(PyObject *self, PyObject *v)
{
    Py_ssize_t res;

    res = PyObject_Size(v);
    if (res < 0 && PyErr_Occurred())
        return NULL;
    return PyInt_FromSsize_t(res);
}
We find a call to PyObject_Size, which is a generic Python C-API function, used in the builtin_len
implementation:
Py_ssize_t
PyObject_Size(PyObject *o)
{
    PySequenceMethods *m;

    m = o->ob_type->tp_as_sequence;
    if (m && m->sq_length)
        return m->sq_length(o);

    return PyMapping_Size(o);
}
On the C level, every Python object (the PyObject *) has a type, reachable via ob_type, and most of its
elements are slots. Sometimes they form a group, here tp_as_sequence, and then it may or may not
contain a function. This one is tried in preference. Then, if that fails, the mapping size is tried next.
Py_ssize_t
PyMapping_Size(PyObject *o)
{
    PyMappingMethods *m;

    if (o == NULL) {
        null_error();
        return -1;
    }

    m = o->ob_type->tp_as_mapping;
    if (m && m->mp_length)
        return m->mp_length(o);
This is the same principle, except with tp_as_mapping and mp_length used.
So from this, we can tell how len gets at what could be a Python class __len__ or other built-in types.
In principle, every slot needs to be dealt with in Nuitka, and it is assumed that currently all slots are
supported at least on a very defensive level, to avoid unnoticed escapes of control flow.
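The C lookup order above can be approximated on the Python level; this sketch uses a lookup of __len__ on the type as a stand-in for the sq_length/mp_length slots, and the function name is invented:

```python
# Python-level approximation of the slot lookup performed by len().
def emulatedSize(o):
    # Slots live on the type, not the instance, so look there.
    type_len = getattr(type(o), "__len__", None)
    if type_len is not None:
        return type_len(o)
    raise TypeError("object of type '%s' has no len()" % type(o).__name__)

assert emulatedSize([1, 2, 3]) == 3
assert emulatedSize("ab") == 2
```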
Exceptions
To handle and work with exceptions, every construct that can raise has either a bool or int return code,
or a PyObject * with NULL return value. This is very much in line with what the Python C-API does.
Every helper function that contains code that might raise needs these variables. After a failed call, our
variant of PyErr_Fetch, called FETCH_ERROR_OCCURRED, must be used to catch the defined error,
unless some quick exception cases apply. A quick exception means that a NULL return from the C-API
without a set exception implies e.g. StopIteration.
As an optimization, for calls to functions that can raise exceptions, but are known not to do so for whatever
reason, the error checks could be reduced to mere assertions.
Exit Targets
Each error or other exit releases statement temporary values and then executes a goto to the exit target.
These targets need to be set up. The try/except will e.g. catch error exits.
Other exits are continue, break, and return exits. They all work alike.
Generally, the exits stack up, with constructs that need to register themselves for some exit types. A loop
e.g. registers the continue exit, and a contained try/finally does too, so it can execute the final code
should it be needed.
Frames
Frames are containers for variable declarations and cleanups. As such, frames provide error exits and
success exits, which remove the frame from the frame stack, and then proceed to the parent exit.
With the use of non PyObject * C types, at frame exception exits the need to convert those types
becomes apparent. Exceptions should still resolve to the C version. When using different C types at frame
exception exits, there is a need to trace the active type, so it can be used in the correct form.
Abortive Statements
The way try/finally is handled, copies of the finally block are made and optimized independently
for each abort method. Those are, of course, return, continue, and break, but also the implicit
and explicit raising of an exception.
Code trailing an abortive statement can be discarded, and the control flow will follow these "exits".
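A simple example of an abortive statement and the dead code behind it:

```python
# The return is abortive: control flow follows the "return exit", so the
# trailing statement is unreachable and can be discarded by optimization.
def f():
    return 1
    print("never reached")  # dead code, removable

assert f() == 1
```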
Constant Preparation
Early versions of Nuitka created all constants for the whole program before the program launches, for
ready access by generated code. It did so in a single file, but that approach didn't scale well.
Problems were:
• Even unused code contributed to start-up time; this can become a lot for large programs, especially in
standalone mode.
• The massive amount of constant creation code gave backend C compilers a much harder time than
necessary to analyze it all at once.
The current approach is as follows. Code generation detects constants used in only one module, and
declares them static there if that module is the only user, or extern if it is not. Some values are forced
to be global, as they are used pre-main or in helpers.
These extern values are globally created before anything is used. The static values are created when
the module is loaded, i.e. something did import it.
We trace used constants per module, and for nested ones, we also associate them. The global constants
code is special in that it can only use static for nested values it exclusively uses, and has to export
values that others use.
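The static/extern decision described above can be sketched as follows; the function name and data layout are invented for illustration:

```python
# Hypothetical sketch: a constant used by exactly one module becomes
# "static" in that module, shared constants become "extern".
def classifyConstants(users_per_constant):
    return {
        constant: "static" if len(users) == 1 else "extern"
        for constant, users in users_per_constant.items()
    }

assert classifyConstants({"c1": {"moduleA"}, "c2": {"moduleA", "moduleB"}}) == {
    "c1": "static",
    "c2": "extern",
}
```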
assert value

# Absolutely the same as:
if not value:
    raise AssertionError
This makes assertions absolutely the same as raising an exception in a conditional statement.
This transformation is performed at tree building already, so Nuitka never knows about assert as an
element, and standard optimizations apply. If e.g. the truth value of the assertion can be predicted, the
conditional statement will have the branch statically executed or removed.
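The equivalence can be checked directly (ignoring the -O flag, under which assert statements vanish entirely):

```python
def viaAssert(value):
    assert value

def viaReformulation(value):
    if not value:
        raise AssertionError

# Both forms agree on raising (or not) for truthy and falsy values.
for value in (0, 1, "", "x"):
    outcomes = []
    for f in (viaAssert, viaReformulation):
        try:
            f(value)
            outcomes.append(None)
        except AssertionError:
            outcomes.append(AssertionError)
    assert outcomes[0] == outcomes[1]
```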
a() < b() > c() < d

This transformation is performed at tree building already. The temporary variables keep the value for the
use of the same expression. Only the last expression needs no temporary variable to keep it.

def _comparison_chain():  # So called "outline" function
    tmp_a = a()
    tmp_b = b()
    tmp = tmp_a < tmp_b
    if not tmp:
        return tmp
    tmp_c = c()
    tmp = tmp_b > tmp_c
    if not tmp:
        return tmp
    del tmp_b
    return tmp_c < d

_comparison_chain()

What we get from this is making the checks of the comparison chain explicit, and comparisons in Nuitka
being internally always about two operands only.
The execfile built-in
Note
This allows optimizations to discover the file opening nature easily and apply file embedding or
whatever we will have there one day.
This transformation is performed when the execfile built-in is detected as such during optimization.
"Nuitka Developer Manual - page 27 - Constant Preparation"
Generator expressions with yield
These are converted at tree building time into a generator function body that yields from the given iterator, which is then put into a for loop to iterate, made into a lambda function, and then called with the first iterator.
That eliminates the generator expression for this case. It's a bizarre construct and with this trick needs no
special code generation.
Here is a complex example, demonstrating multiple uses of yield in unexpected places:
Function Decorators
When one learns about decorators, one sees that:

@decorator
def function():
    pass
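The decorator syntax is just an assignment of the call result back to the function name; a minimal runnable illustration (the decorator body is hypothetical, only there to make the effect visible):

```python
def decorator(f):
    # Hypothetical decorator, only for illustration.
    def wrapper():
        return ("wrapped", f())
    return wrapper

@decorator
def function():
    return 42

# The same thing without the "@" syntax support:
def function2():
    return 42
function2 = decorator(function2)

print(function())   # → ('wrapped', 42)
print(function2())  # → ('wrapped', 42)
```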
"Nuitka Developer Manual - Constant Preparation"
Functions nested arguments
Nested arguments are a Python2-only feature:

def function((a, b), c):
    return a, b, c

The nested tuple becomes a single hidden argument that is unpacked explicitly, with the body moved into an outline function:

def function(_1, c):
    a, b = _1
    return _tmp(a, b, c)
The .1 is the variable name used by CPython internally, and this actually works if you use keyword arguments via a star dictionary. So this is very compatible and actually the right kind of re-formulation, and it removes the need for the code that does parameter parsing to deal with these.
Obviously, there is no frame for _tmp, just one for function, and we do not use local variables, but temporary variables.
In-place Assignments
In-place assignments are re-formulated to an expression using temporary variables.
These are not as much a reformulation of += to +, but instead one which makes it explicit that the assign
target may change its value.
a += b

_tmp = a.__iadd__(b)
if a is not _tmp:
    a = _tmp
We use __iadd__ here to express that for +, the in-place variant iadd is used instead. The is check may be optimized away depending on type and value knowledge later on.
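The effect of the in-place slot can be observed directly in plain Python; this sketch (the helper name is hypothetical) falls back to + when no in-place slot exists:

```python
def inplace_add(a, b):
    # Use the in-place slot when present, otherwise the normal add;
    # this mirrors the re-formulation's use of "__iadd__".
    tmp = a.__iadd__(b) if hasattr(a, "__iadd__") else a + b
    return tmp

a = [1, 2]
result = inplace_add(a, [3])
print(result is a)  # lists implement __iadd__ in place → True

x = 1
print(inplace_add(x, 2))  # ints have no __iadd__, fall back to + → 3
```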
Complex Assignments
Complex assignments are defined as those with multiple targets to assign from a single source and are
re-formulated to such using a temporary variable and multiple simple assignments instead.
a = b = c
_tmp = c
a = _tmp
b = _tmp
del _tmp
This is possible, because in Python, if one assignment fails, it can just be interrupted, so in fact, they are
sequential, and all that is required is to not calculate c twice, which the temporary variable takes care of.
Were b a more complex expression, e.g. b.some_attribute that might raise an exception, a would still
be assigned.
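That left-to-right order of targets is observable: the leftmost target is assigned before a later target can fail. The NoAttrs class here is just a demonstration device:

```python
class NoAttrs:
    # Empty __slots__: assigning any attribute raises AttributeError.
    __slots__ = ()

obj = NoAttrs()
try:
    a = obj.attr = 42
except AttributeError:
    pass

print(a)  # → 42, "a" was already assigned before "obj.attr" failed
```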
Unpacking Assignments
Unpacking assignments are re-formulated to use temporary variables as well.
What looks like this:

a, b.attr, c[ind] = d = e, f, g = h()

Becomes this:
_tmp = h()
_iter1 = iter(_tmp)
_tmp1 = unpack(_iter1, 3)
_tmp2 = unpack(_iter1, 3)
_tmp3 = unpack(_iter1, 3)
unpack_check(_iter1)
a = _tmp1
b.attr = _tmp2
c[ind] = _tmp3
d = _tmp
_iter2 = iter(_tmp)
_tmp4 = unpack(_iter2, 3)
_tmp5 = unpack(_iter2, 3)
_tmp6 = unpack(_iter2, 3)
unpack_check(_iter2)
e = _tmp4
f = _tmp5
g = _tmp6
That way, the unpacking is decomposed into multiple simple statements. It will be the job of optimizations
to try and remove unnecessary unpacking, in case e.g. the source is a known tuple or list creation.
Note
The unpack is a special node which is a form of next that will raise a ValueError when it cannot
get the next value, rather than a StopIteration. The message text contains the number of
values to unpack, therefore the integer argument.
Note
The unpack_check is a special node that raises a ValueError exception if the iterator is not
finished, i.e. there are more values to unpack. Again the number of values to unpack is provided to
construct the error message.
With Statements
The with statements are re-formulated to use temporary variables as well. The taking and calling of __enter__ and __exit__ with arguments is expressed with standard operations instead. The promise to call __exit__ is fulfilled by a try/except clause instead.
with some_context as x:
    something(x)
tmp_source = some_context
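A simplified runnable sketch of the resulting shape (ignoring that the real re-formulation looks up __enter__ and __exit__ on the type and uses special nodes for the exception state):

```python
import sys

log = []

class SomeContext:
    def __enter__(self):
        return "value"
    def __exit__(self, exc_type, exc_value, tb):
        log.append("exit")
        return False  # do not suppress exceptions

def something(x):
    log.append("body got " + x)

some_context = SomeContext()

# Sketch of: with some_context as x: something(x)
tmp_source = some_context
tmp_exit = tmp_source.__exit__            # taken before entering the block
tmp_enter_result = tmp_source.__enter__()
tmp_indicator = False                     # was __exit__ called already?
try:
    x = tmp_enter_result
    something(x)
except BaseException:
    tmp_indicator = True
    if not tmp_exit(*sys.exc_info()):
        raise
finally:
    if not tmp_indicator:
        tmp_exit(None, None, None)

print(log)  # → ['body got value', 'exit']
```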
For Loops
A for loop with an else branch:

for x in iterable:
    if something(x):
        break
else:
    otherwise()

is re-formulated to use an explicit iterator, with the else branch guarded by an indicator variable:

_iter = iter(iterable)
_no_break_indicator = False
while 1:
    try:
        _tmp_value = next(_iter)
    except StopIteration:
        # Set the indicator that the else branch may be executed.
        _no_break_indicator = True
        break
    x = _tmp_value
    del _tmp_value
    if something(x):
        break
if _no_break_indicator:
    otherwise()
"Nuitka Developer Manual - Constant Preparation"
While Loops
A while loop

while condition:
    something()

becomes:

while 1:
    if not condition:
        break
    something()
This is to totally remove the specialization of loops, with the condition moved to the loop body in an initial
conditional statement, which contains a break statement.
That achieves that only break statements exit the loop, and allows optimization to remove always-true loop conditions without concerning code generation about it, and to detect such situations, e.g. endless loops.
Note
Loop analysis (not yet done) can then work on a reduced problem (which break statements are
executed under what conditions) and is then automatically very general.
The fact that the loop body may not be entered at all, is still optimized, but also in the general
sense. Explicit breaks at the loop start and loop conditions are the same.
Exception Handlers
Exception handlers in Python may assign the caught exception value to a variable in the handler definition, and the different handlers are represented as conditional checks on the result of comparison operations.
try:
    block()
except A as e:
    handlerA(e)
except B as e:
    handlerB(e)
else:
    handlerElse()
try:
    block()
except:
    # These are special nodes that access the exception, and don't really
    # use the "sys" module.
    tmp_exc_type = sys.exc_info()[0]
    tmp_exc_value = sys.exc_info()[1]
    # These checks are special nodes that do exception matching.
    if exception_match(tmp_exc_type, A):
        e = tmp_exc_value
        handlerA(e)
    elif exception_match(tmp_exc_type, B):
        e = tmp_exc_value
        handlerB(e)
    else:
        # Re-raise if no handler matched.
        raise
else:
    handlerElse()
For Python3, the assigned e variables get deleted at the end of the handler block. Should that value be
already deleted, that del does not raise, therefore it's tolerant. This has to be done in any case, so for
Python3 it is even more complex.
try:
    block()
except:
    # These are special nodes that access the exception, and don't really
    # use the "sys" module.
    tmp_exc_type = sys.exc_info()[0]
    tmp_exc_value = sys.exc_info()[1]
    if exception_match(tmp_exc_type, A):
        try:
            e = tmp_exc_value
            handlerA(e)
        finally:
            # Tolerant delete, does not raise if already deleted.
            del e
    elif exception_match(tmp_exc_type, B):
        try:
            e = tmp_exc_value
            handlerB(e)
        finally:
            del e
    else:
        raise
else:
    handlerElse()
Class Creation (Python2)

class SomeClass:
    some_member = 3

The class body is turned into a function that returns the class dictionary:

def _makeSomeClass():
    # The module name becomes a normal local variable too.
    __module__ = "SomeModule"
    some_member = 3
    return locals()
Class Creation (Python3)
For Python3, the prepared dictionary of the metaclass is used, and the class body function returns __class__:

# The function that creates the class dictionary. Receives temporary variables
# to work with.
def _makeSomeClass():
    # This has effect, currently I don't know how to express that in Python3
    # syntax, but we will have a node that does that.
    locals().replace(tmp_prepared)
    some_member = 3
    return __class__
Generator Expressions
These are re-formulated as functions.
Generally they are turned into calls of function bodies with (potentially nested) for loops.
List Contractions
A list contraction like

list_value = [x * 2 for x in range(8) if cond()]

is re-formulated to a helper function:

def _listcontr_helper(__iterator):
    result = []
    for x in __iterator:
        if cond():
            result.append(x * 2)
    return result

list_value = _listcontr_helper(range(8))
The difference is that with Python3, the function "_listcontr_helper" is really there and named
<listcontraction> (or <listcomp> as of Python3.7 or higher), whereas with Python2 the function is
only an outline, so it can readily access the containing name space.
Set Contractions
The set contractions of Python2.7 are like list contractions in Python3, in that they produce an actual
helper function:
def _setcontr_helper(__iterator):
    result = set()
    for x in __iterator:
        if cond():
            result.add(x * 2)
    return result

set_value = _setcontr_helper(range(8))
Dictionary Contractions
The dictionary contractions are like list contractions in Python3, in that they produce an actual helper function:

def _dictcontr_helper(__iterator):
    result = {}
    for x in __iterator:
        if cond():
            result[x] = x * 2
    return result

dict_value = _dictcontr_helper(range(8))
Simple Calls
As seen below, even complex calls become simple calls. In simple calls of Python there are still some hidden semantics going on, which we expose.
On the C-API level, a tuple and a dictionary are built. This is exposed:

something(arg1, arg2, named1=arg3, named2=arg4)

something(*(arg1, arg2), **{"named1": arg3, "named2": arg4})
A called function will access this tuple and the dictionary to parse the arguments, once that is also
re-formulated (argument parsing), it can then lead to simple in-lining. This way calls only have 2 arguments
with constant semantics, that fits perfectly with the C-API where it is the same, so it is actually easier for
code generation.
Although the above looks like a complex call, it actually is not. No checks are needed for the types of the
star arguments and it's directly translated to PyObject_Call.
Complex Calls
The call operator in Python allows providing arguments in four forms: positional arguments, keyword arguments, star-list arguments, and star-dict arguments.
The task here is that first all the arguments are evaluated, left to right, and then they are merged into only two, that is positional and named arguments only. For this, the star list argument and the star dictionary argument are merged with the positional and named arguments.
What's peculiar is that if both the star list and star dictionary arguments are present, the checking is first done for the star dictionary, and only after that for the star list argument. This makes a difference, because in case of an error, the star dictionary argument raises first.
something(*1, **2)
This raises "TypeError: something() argument after ** must be a mapping, not int" as opposed to a
possibly more expected "TypeError: something() argument after * must be a sequence, not int."
That doesn't matter much though, because the value is to be evaluated first anyway, and the check is only
performed afterwards. If the star list argument calculation gives an error, this one is raised before checking
the star dictionary argument.
So, what we do is convert complex calls by way of special functions, which handle the dirty work for us. Optimization is then tasked to do the difficult stuff. A call like

returned = something(pos1, pos2, name1=named1, name2=named2, *star_list, **star_dict)

becomes this:
returned = _complex_call(
    called=something,
    pos=(pos1, pos2),
    named={"name1": named1, "name2": named2},
    star_list_arg=star_list,
    star_dict_arg=star_dict,
)
The call to _complex_call is a direct function call with no parameter parsing overhead. And the call at its end is a special call operation, which relates to the PyObject_Call C-API.
Assignment Expressions
In Python 3.8 or higher, you can assign inside expressions.
if (x := cond()):
    do_something()
# Doesn't exist with that name, and it is not really taking closure variables,
# it just shares the execution context.
def _outline_func():
    nonlocal x
    x = cond()
    return x

if _outline_func():
    do_something()
When we use this outline function, we are allowed statements, even assignments, in expressions. For optimization, they of course pose a challenge; removing them again only happens when the outline becomes just a return statement. But they do not cause much difficulty for code generation, since they are transparent.
Match Statements
In Python 3.10 or higher, you can write so called match statements like this:
match something():
    case [x] if x:
        z = 2
    case _ as y if y == x and y:
        z = 1
    case 0:
        z = 0
Print Statements
The Python2 print statement is in Nuitka converted so that the code generation for print doesn't do any conversions itself anymore and relies on the string nature of its input.
Only string objects are spared from the str built-in wrapper, because that would only cause noise in
optimization stage. Later optimization can then find it unnecessary for certain arguments.
Additionally, each print may have a target, and multiple arguments, which we break down as well for
dumber code generation. The target is evaluated first and should be a file, kept referenced throughout the
whole print statement.
tmp_target = target_file
try:
    # ... one print per argument to "tmp_target" goes here ...
finally:
    del tmp_target
This allows code generation to not deal with an arbitrary number of arguments to print. It also separates the newline indicator from the rest, which makes sense too, having it as a special node, as its behavior with regard to soft-space is different of course.
And finally, for print without a target, we still assume that a target was given, which would be
sys.stdout in a rather hard-coded way (no variable look-ups involved).
# could be more
...
tmp_result = []
try:
    while 1:
        tmp_result.append(
            (
                next(tmp_iter_1),
                next(tmp_iter_2),
                next(tmp_iter_3),
                # more arguments here ...
            )
        )
except StopIteration:
    pass
return tmp_result
Builtin min
result = tmp_arg1
if keyfunc is not None:  # can be decided during re-formulation
    tmp_key_result = keyfunc(result)
    tmp_key_candidate = keyfunc(tmp_arg2)
    if tmp_key_candidate < tmp_key_result:
        result = tmp_arg2
        tmp_key_result = tmp_key_candidate
    tmp_key_candidate = keyfunc(tmp_arg3)
    if tmp_key_candidate < tmp_key_result:
        result = tmp_arg3
        tmp_key_result = tmp_key_candidate
    # more arguments here ...
else:
    if tmp_arg2 < result:
        result = tmp_arg2
    if tmp_arg3 < result:
        result = tmp_arg3
    # more arguments here ...
return result
Builtin max
See min just with > instead of <.
def f(arg1, arg2):
    return arg1 + arg2

x = f(a, b + c)
Note
The lambda stands here for a reference to the function, rather than a variable reference, this is the
normal forward propagation of values, and does not imply duplicating or moving any code at all.
At this point, we still have not resolved the actual call arguments to the variable names; still a Python-level function is created and called, and arguments are parsed to a tuple and from a tuple. For simplicity's sake, we have left keyword arguments out of the equation for now, but they are even more costly.
So now, what we want to do is to re-formulate the call into what we call an outline body, which is an inlined function that does the parameter parsing already and contains the function code too. In this inlining, there still is a function, but it's technically not a Python function anymore, just something that is an expression whose value is determined by control flow and the function call.
def _f():
    tmp_arg1 = a
    tmp_arg2 = b + c
    return tmp_arg1 + tmp_arg2

x = _f()
With this, a function is considered inlined, because it becomes part of the abstract execution, and the
actual code is duplicated.
The point is that matching the signature of the function to the actual arguments given is pretty straightforward in many cases, but there are two forms of complications that can happen. One is default values, because they need to be assigned or not, and the other is keyword arguments, because they allow reordering arguments.
Let's consider an example with default values first.
x = f(a)
Since defaults are taken at the point of the function definition, we must execute them at that point and make them available.
def _f():
    tmp_arg1 = a
    tmp_arg2 = tmp_defaults[0]
    return tmp_arg1 + tmp_arg2

x = _f()
Now, one where keyword arguments are ordered the other way.
"Nuitka Developer Manual - Nodes that serve special purposes"
Releases
When a function exits, the local variables are to be released. The same applies to temporary variables used in re-formulations. These releases cause a reference to the object to be released, but no value change. They are typically the last use of the object in the function.
They are similar to del, but make no value change. For shared variables this effect is most visible.
Side Effects
When an exception is bound to occur, and this can be determined at compile time, Nuitka will not generate the code that leads up to the exception, but directly just raise it. But this is not the full picture in all cases.
Consider this code:
f(a(), 1 / 0)
The second argument will create a ZeroDivisionError exception, but before that a() must be
executed, but the call to f will never happen and no code is needed for that, but the name look-up must
still succeed. This then leads to code that is internally like this:
f(a(), raise_ZeroDivisionError())
side_effect(a(), f, raise_ZeroDivisionError())
where we can consider side_effect to be a function that returns the last expression. Of course, if this is not part of another expression, but close to statement level, the side effects can simply be converted to multiple statements.
Another use case, is that the value of an expression can be predicted, but that the language still requires
things to happen, consider this:
a = len((f(), g()))
We can tell that a will be 2, but the call to f and g must still be performed, so it becomes:
a = side_effects(f(), g(), 2)
Modelling side effects explicitly has the advantage of recognizing them easily and allowing to drop the call
to the tuple building and checking its length, only to release it.
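The side_effects node behaves, in pure Python terms, like a function evaluating its arguments left to right and returning the last one (a toy model, not Nuitka's implementation):

```python
calls = []

def side_effects(*values):
    # All arguments were already evaluated left to right by the call itself;
    # only the last value is the result.
    return values[-1]

def f():
    calls.append("f")

def g():
    calls.append("g")

# For "a = len((f(), g()))" the length 2 is predictable, but the calls remain:
a = side_effects(f(), g(), 2)
print(a, calls)  # → 2 ['f', 'g']
```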
"Nuitka Developer Manual - page 48 - Optimizing Attribute Lookups into Method Calls for Built-ins types"
"Nuitka Developer Manual - Type Inference - The Discussion"
Note
The "cffi" interface may not have the issue, but it's not something we need to write or test the code for.
2. Allowance: May use ctypes module at compile time to ask things about ctypes and its types.
3. Goal: Should make use of ctypes, to e.g. not hard code in Nuitka what ctypes.c_int() gives on
the current platform, unless there is a specific benefit.
4. Allowance: Not all ctypes usages must be supported immediately.
5. Goal: Try and be as general as possible.
For the compiler, ctypes support should be hidden behind a generic interface of some sort.
Supporting math module should be the same thing.
a = 3
b = a
b += 4 # a is not changed
a = [3]
b = a
If we cannot tell, we must assume that a might be changed. It's either b or what a was before. If the type is
not mutable, we can assume the aliasing to be broken up, and if it is, we can assume both to be the same
value still.
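The difference between the two aliasing cases can be checked directly:

```python
# Immutable value: "b += 4" rebinds "b", leaving "a" untouched.
a = 3
b = a
b += 4
print(a, b)  # → 3 7

# Mutable value: both names keep referring to the same list object.
a = [3]
b = a
b += [4]
print(a, b, a is b)  # → [3, 4] [3, 4] True
```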
When that value is a compile time constant, we will want to push it forward, and we do that with
"(Constant) Value Propagation", which is implemented already. We avoid too large constants, and we
properly trace value assignments, but not yet aliases.
In order to fully benefit from type knowledge, the new type system must be able to be fully friends with
existing built-in types, but for classes to also work with it, it should not be tied to them. The behavior of a
type long, str, etc. ought to be implemented as far as possible with the built-in long, str at compile time as well.
len("a" *
"Nuitka Developer Manual - Applying this to "ctypes""
And even if x were used, only the ability to predict the value from a function would be interesting, so we
would use that computation function instead of having an iteration source. Being able to predict from a
function could mean to have Python code to do it, as well as C code to do it. Then code for the loop can be
generated without any CPython library usage at all.
Note
Of course, it would only make sense where such calculations are "O(1)" complexity, i.e. do not
require recursion like "n!" does.
The other thing is that CPython appears to, at run time, take length hints from objects for some operations, and there it would help too to track lengths of objects and provide them to outside code.
Back to the original example:
len("a" * 1000000000000)
The theme here is that we can't compute all intermediate expressions, and we surely can't do it in the general case. But we can still predict some properties of an expression result, more or less.
Here we have len to look at an argument that we know the size of. Great. We need to ask if there are any
side effects, and if there are, we need to maintain them of course. This is already done by existing
optimization if an operation generates an exception.
Note
The optimization of len has been implemented and works for all kinds of container creation and
ranges.
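A toy model of that prediction: when the operands of a string repetition are known, the result length follows without building the string (the helper name is hypothetical):

```python
def predicted_len_of_str_mult(s, n):
    # len(s * n) == len(s) * n for non-negative n, and 0 otherwise,
    # so the huge intermediate string never needs to exist.
    return len(s) * max(n, 0)

# Verify the model against the real thing on small inputs.
assert predicted_len_of_str_mult("ab", 3) == len("ab" * 3)
assert predicted_len_of_str_mult("a", -5) == len("a" * -5)

print(predicted_len_of_str_mult("a", 1000000000000))  # → 1000000000000
```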
import ctypes
This leads to Nuitka in its tree to have an assignment from a __import__ expression to the variable
ctypes. It can be predicted by default to be a module object, and even better, it can be known as ctypes
from standard library with more or less certainty. See the section about "Importing".
So that part is "easy", and it's what will happen. During optimization, when the module __import__
expression is examined, it should say:
• ctypes is a module
• ctypes is from standard library (if it is, might not be true)
• ctypes then has code behind it, called a ModuleFriend, that knows things about its attributes and should be asked.
The latter is the generic interface, and the optimization should connect the two, of course via package and module full names. It will need a ModuleFriendRegistry from which it can be pulled. It would be nice if we can avoid ctypes being loaded into Nuitka unless necessary, so these need to be more like plug-ins, loaded only if necessary, i.e. when the user code actually uses ctypes.
Coming back to the original expression, it also contains an assignment, because it is re-formulated to be more like this:
ctypes = __import__("ctypes")
The variable ctypes that is assigned to simply gets the inferred type propagated as part of an SSA form. For local variables, we can be sure that nothing in the program changes the variable, and therefore have only one version of that variable.
For module variables, when the execution leaves the module to unknown code, or unclear code, it might
change the variable. Therefore, likely we will often only assume that it could still be ctypes, but also
something else.
Depending on how well we control module variable assignment, we can decide this more or less quickly.
With "compiled modules" types, the expectation is that it's merely a quick C == comparison check. The
module friend should offer code to allow a check if it applies, for uncertain cases.
Then when we come to uses of it:
ctypes.c_int()
At this point, using SSA, we are more or less sure that ctypes is at that point the module, and that we know what its c_int attribute is at compile time, and what its call result is. We will use the module friend
to help with that. It will attach knowledge about the result of that expression during the SSA collection
process.
This is more like a value forward propagation than anything else. In fact, constant propagation should only
be the special case of it, and one design goal of Nuitka was always to cover these two cases with the
same code.
Excursion to Functions
In order to decide what this means to functions and their call boundaries, if we propagate forward, how to
handle this:

def my_append(a, b):
    a.append(b)
    return a
We annotate that a is first an "unknown but defined parameter object", and later on something that definitely has an append attribute when returned, as otherwise an exception occurs.
The type of a changes to that after a.append look-up succeeds. It might be many kinds of an object, but
e.g. it could have a higher probability of being a PyListObject. And we would know it cannot be a
PyStringObject, as that one has no append method, and would have raised an exception therefore.
Note
If classes, i.e. other types in the program, have an append attribute, it should play a role too, there
needs to be a way to plug-in to this decisions.
Note
On the other hand, types without append attribute can be eliminated.
Therefore, functions through SSA provide an automatic analysis on their return state, or return value types,
or a quick way to predict return value properties, based on input value knowledge.
So this could work:
b = my_append([], 3)
Goal: The structure we use makes it easy to tell what my_append may be. So, there should be a means to
ask it about call results with given type/value information. We need to be able to tell, if evaluating
my_append makes sense with given parameters or not, if it does impact the return value.
We should e.g. be able to make my_append tell, one or more of these:
• Returns the first parameter value as return value (unless it raises an exception).
• The return value has the same type as a (unless it raises an exception).
• The return value has an append attribute.
• The return value might be a list object.
• The return value may not be a str object.
• The function will raise if first argument has no append attribute.
The exactness of statements may vary. But some things may be more interesting. If e.g. the aliasing of a parameter value to the return value is known exactly, then information about it need not all be given up; some can survive.
It would be nice, if my_append had sufficient information, so we could specialize with list and int from
the parameters, and then e.g. know at least some things that it does in that case. Such specialization
would have to be decided if it makes sense. In the alternative, it could be done for each variant anyway, as
there won't be that many of them.
Doing this "forward" analysis appears to be best suited for functions and therefore long term. We will try it
that way.
Excursion to Loops
a = 1

while 1:
    if cond():
        break

    a = a + 1

print(a)
"Nuitka Developer Manual - Excursion to Conditions"
The handling of loops (both for and while are re-formulated to this kind of loop with break statements) has its own problems. The loop start may have an assumption from before the loop started, that a is constant, but that is only true for the first iteration. So, we can't pass knowledge from outside the loop forward directly into the loop body.
So the trace collection for loops needs to be two-pass: a first pass collects assignments, which are merged into the start state before entering the loop body. The need to make two passes is special to loops.
For a start, it is done like this. At loop entry, all pre-existing, but written traces, are turned into loop merges.
Knowledge is not completely removed about everything assigned or changed in the loop, but then it's not
trusted anymore.
From that basis, the break exits are analysed and merged, building up the post-loop state, while the continue exits of the loop replace the unknown part of the loop entry state. The loop end is considered a continue for this purpose.
Excursion to Conditions
if cond:
    x = 1
else:
    x = 2

b = x < 3
The above code contains a condition, and these have the problem, that when exiting the conditional block,
a merge must be done, of the x versions. It could be either one. The merge may trace the condition under
which a choice is taken. That way, we could decide pairs of traces under the same condition.
These merges of SSA variable "versions", represent alternative values. They pose difficulties, and might
have to be reduced to commonality. In the above example, the < operator will have to check for each
version, and then to decide that both indeed give the same result.
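A toy model of such a merge (the dict representation is entirely hypothetical, not Nuitka's trace objects): the merged trace keeps only the facts both versions agree on:

```python
def merge_traces(trace_a, trace_b):
    # Keep only the facts both branch versions agree on.
    return {key: value for key, value in trace_a.items()
            if trace_b.get(key) == value}

# After "if cond: x = 1 else: x = 2" both versions know the type,
# but not a common value:
version_true = {"type": "int", "value": 1}
version_false = {"type": "int", "value": 2}
print(merge_traces(version_true, version_false))  # → {'type': 'int'}
```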
The trace collection tracks variable changes in conditional branches, and then merges the existing state at
conditional statement exits.
Note
A branch is considered "exiting" if it is not abortive. Should it end in a raise, break, continue, or
return, there is no need to merge that branch, as execution of that branch is terminated.
Should both branches be abortive, that makes things really simple, as there is no need to even
continue.
Should only one branch exist, but be abortive, then no merge is needed, and the collection can
assume after the conditional statement, that the branch was not taken, and continue.
When exiting both the branches, these branches must both be merged, with their new information.
In the above case:
b = x < 3
if type(a) is list:
    a.append(x)
else:
    a += (x,)
In this case, the knowledge that a is a list, could be used to generate better code and with the definite
knowledge that a is of type list. With that knowledge the append attribute call will become the list built-in
type operation.
Mixed Types
Consider the following inside a function or module:

if cond():
    a = [1, 2]
else:
    a = (1, 2)
A programmer will often not make a difference between list and tuple. In fact, using a tuple is a good way to express that something won't be changed later, as these are immutable.
Note
Better programming style, would be to use this:
People don't do it, because they dislike the performance hit encountered by the generator
expression being used to initialize the tuple. But it would be more consistent, and so Nuitka is using
it, and of course one day Nuitka ought to be able to make no difference in performance for it.
To Nuitka though this means that if cond is not predictable, after the conditional statement we may have either a tuple or a list type object in a. In order to represent that without resorting to "I know nothing about it", we need a kind of min/max operating mechanism that is capable of saying what is common between multiple alternative values.
Note
At this time, we don't really have that mechanism to find the commonality between values.
Back to "ctypes"
v = ctypes.c_int()
Coming back to this example, we needed to propagate ctypes, then we can propagate "something" from ctypes.c_int, and then know what this gives with a call and no arguments. So the walk of the nodes and the diverse operations should be addressed by a module friend.
In case a module friend doesn't know what to do, it needs to say so by default. This should be enforced by a base class, and give a warning or note.
a = 1
b = a + a
In this example, the references to a can look up the 1 in the trace, and base the value shape response to + on it. For compile time evaluation, it may also ask isCompileTimeConstant(), and if both nodes respond True, then getCompileTimeConstant() will return 1, which will be used in the computation.
Then extractSideEffects() for the a reference will return (), and therefore the computed result will not need to be wrapped.
An alternative approach would be hasTypeSlotAdd() on both nodes; if they both have it, the selection mechanism used by CPython can be used to find which type's + should be used.
• Class for module import expression ExpressionImportModule.
This one just knows that something is imported, but not how or what it is assigned to. It will be able in
a recursive compile, to provide the module as an assignment source, or the module variables or
submodules as an attribute source when referenced from a variable trace or in an expression.
• Base class for module friend ModuleFriendBase.
This is intended to provide something to overload, which e.g. can handle math in a better way.
• Module ModuleFriendRegistry
Provides a register function with name and instances of ValueFriendModuleBase to be registered.
Recursed to modules should integrate with that too. The registry could well be done with a metaclass
approach.
• The module friends should each live in a module of their own.
With a naming policy to be determined. These modules should add themselves via above mechanism
to ModuleFriendRegistry, and all shall be imported and register themselves. Importing of e.g. ctypes should be delayed to when the friend is actually used. A meta class should aid this task.
The delay will avoid unnecessary bloat of the compiler at run time, if no such module is used. For "qt"
and other complex stuff, this will be a must.
• The walk should initially be single pass, and not maintain history.
Instead optimization that needs to look at multiple things, e.g. "unused assignment", will look at the
whole SSA collection afterwards.
# import gives a module any case, and the "ModuleRegistry" may say more.
import ctypes
# From import need not give module, "x" decides what it is.
from x import y
The optimization is mostly performed by walking the tree and performing trace collection. When it encounters assignments and references to them, it considers the current state of traces and uses it for computeExpression.
Note
Assignments to attributes, indexes, slices, etc. will also need to follow the flow of append, so it
cannot escape attention that a list may be modified. Usages of append that we cannot be sure
about, must be traced to exist, and disallow the list to be considered known value again.
Instead, the ctypes value friend will be asked to give Identifiers, like other code does too. And these need to be able to convert themselves to objects to work with the other things.
But Code Generation should no longer require that operations must be performed on that level. Imagine
e.g. the following calls:
c_call(other_c_call())
A value returned by other_c_call() of, say, c_int type should be possible to feed directly into another call. That should be easy by having an asIntC() in the identifier classes, which the ctypes Identifiers handle without conversions.
Code Generation should one day also become able to tell that all uses of a variable have only c_int
value, and use int instead of PyObjectLocalVariable more or less directly. We could consider
PyIntLocalVariable of similar complexity as int after the C++ compiler performed its in-lining.
Such decisions would be prepared by finalization, which then would track the history of values throughout
a function or part of it.
Initial Implementation
The basic interface will be added to all expressions and a node may override it, potentially using trace
collection state, as attached during computeExpression.
Goal 1 (Reached)
Initially, most things will only be able to give up on just about anything. It will be little more than a tool to do
simple look-ups in a general form. The first goal will then be to turn the following code into better
performing one:
a = 3
b = 7
c = a / b
print(c)
to:
a = 3
b = 7
c = 3 / 7
print(c)
and then:
a = 3
b = 7
c = 0
print(c)
and then:
a = 3
b = 7
c = 0
print(0)
This depends on SSA form being able to tell us that the values of a, b, and c are written to by constants,
which can be forward propagated at no cost.
Goal 2 (Reached)
The assignments to a, b, and c shall all become prey to "unused" assignment analysis in the next step.
They are all only assigned to, and the assignment source has no effect, so they can be simply dropped.
print(0)
In SSA form, these are then assignments without references. These assignments can be removed if
the assignment source has no side effect. Or at least they could be made "anonymous", i.e. use a
temporary variable instead of the named one. That would have to take into account, though, that the old
version still needs a release.
The most general form would first merely remove assignments that have no impact, and leave the value as
a side effect, so we arrive at this first:
3
7
0
print(0)
When applying the removal of expression only statements without effect, this gives us:
print(0)
which is the perfect result. Doing it in one step would only be an optimization at the cost of generalization.
In order to be able to manipulate nodes related to a variable trace, we need to attach the nodes that did it.
Consider this:
if cond():
    x = 1
elif other():
    x = 3
In the above case, the merge of the value traces should say that x may be undefined, or one of 1 or 3, but
since x is not used, apply the "dead value" trick to each branch.
The removal of the "merge" of the 3 x versions should exhibit that the other versions are also only
assigned to, and can be removed. These merges of course appear as usages of the x versions.
Goal 3
The third goal is to understand all of this:
print(a)
for i in range(1000):
    print(a)
    a.append(i)
if cond:
    x = 1
else:
    x = 2
return x < y
In this we have a branch, and we will be required to keep track of both the branches separately, and then
to merge with the original knowledge. After the conditional statement we will know that "x" is an "int" with
possible values in (1,2), which can be used to predict that the return value is always True.
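That merge-and-fold reasoning can be pictured with a toy helper. This is an illustration only, assuming y is known to be at least 3 so the comparison is decidable; the names merged_values and fold_compare_lt are invented here, not Nuitka APIs.

```python
# Toy sketch of branch merging: after the if/else, the trace for x is
# "int, one of {1, 2}". A comparison folds only if it has the same
# result for every possible value.

def merged_values():
    return {1, 2}  # possible values of x after the conditional

def fold_compare_lt(xs, y):
    results = {x < y for x in xs}
    # A single possible result means the comparison is compile-time known.
    return results.pop() if len(results) == 1 else None  # None = unknown

assert fold_compare_lt(merged_values(), 3) is True   # x < 3 always holds
assert fold_compare_lt(merged_values(), 2) is None   # 1 < 2 but not 2 < 2
```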
The fourth goal will therefore be that the "ValueFriendConstantList" knows that append changes a value,
but it remains a list, and that the size increases by one. It should provide another value friend
"ValueFriendList" for "a" due to that.
In order to do that, such code must be considered:
a = []
a.append(1)
a.append(2)
print(len(a))
It will be good, if len still knows that a is a list object, but not the constant list anymore.
From here, work should be done to demonstrate the correctness of it with the basic tests applied to
discover undetected issues.
Fifth and optional goal: Extra bonus points for being able to track and predict append to update the
constant list in a known way. Using list.append that should be done and lead to a constant result of
len being used.
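The degradation described in these goals could be pictured with a toy trace object. ListValueTrace is an invented name for illustration, not a Nuitka class: an append we fully understand keeps the constant up to date, while one we cannot analyze demotes the trace to "some list".

```python
# Toy sketch of a list value trace: starts as a known constant list,
# and degrades to "is a list, content unknown" on an untracked append.

class ListValueTrace:
    def __init__(self, constant):
        self.constant = list(constant)  # None once only the type is known

    def on_append(self, value, understood=True):
        if self.constant is None:
            return  # already demoted, nothing more to track
        if understood:
            self.constant.append(value)  # still a known constant list
        else:
            self.constant = None  # only "is a list" survives

    def known_len(self):
        return None if self.constant is None else len(self.constant)

trace = ListValueTrace([])
trace.on_append(1)
trace.on_append(2)
assert trace.known_len() == 2        # len() could be folded to 2
trace.on_append(object(), understood=False)
assert trace.known_len() is None     # still a list, but length unknown
```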
The sixth and challenging goal will be to make the code generation be impacted by the value friends types.
It should have a knowledge that PyList_Append does the job of append and use PyList_Size for len.
The "ValueFriends" should aid the code generation too.
The last and right now optional goal will be to make range have a value friend that can interact with iteration
of the for loop, and with append of the list value friend, so it knows it is possible to iterate 1000 times, and
that "a" has this size after the loop, so len(a) could be predicted. During the loop, the range of a's
length should be known to be less than 1000. That would make the code of goal 3 completely
analyzed at compile time.
print(ctypes.c_int(17) + ctypes.c_long(19))
Later then call to "libc" or something else universally available, e.g. "strlen()", from full
blown declarations of the callable.
• We won't have the ability to test that optimizations are actually performed; we will check the generated
code by hand.
With time, we will add XML based checks with "xpath" queries, expressed as hints, but that is some
work that will be based on this work here. The "hints" fit into the "ValueFriends" concept nicely, or so
the hope is.
• No inter-function optimization yet
Of course, once in place, it will make the ctypes annotation even more usable. Using ctypes
objects inside functions, while creating them on the module level, is therefore not immediately going
to work.
• No loops yet
Loops break value propagation. For the ctypes use case, this won't be much of a difficulty. Due to
the strangeness of the task, it should be tackled later on at a higher priority.
• Not too much.
Try and get simple things to work now. We shall see, what kinds of constraints really make the most
sense. Understanding list subscript/slice values e.g. is not strictly useful for much code and should
not block us.
Note
This design is not likely to be the final one.
Command Line
Experimental features are enabled with the command line argument --experimental=feature_name.
In C code
In Scons, all experimental features automatically are converted into C defines, and can be used like this:
#ifdef _NUITKA_EXPERIMENTAL_JINJA_GENERATED_ADD
#include "HelpersOperationGeneratedBinaryAdd.c"
#else
#include "HelpersOperationBinaryAdd.c"
#endif
The C pre-processor is the only thing that makes an experimental feature usable.
In Python
You can query experimental features using Options.isExperimental() with e.g. code like this:
if Options.isExperimental("use_feature"):
    experimental_code()
else:
    standard_code()
When to use it
Often we need to keep features in parallel because they are not finished, or need to be tested after a merge
and should not break things. Then we can make code changes that will not make a difference unless the
experimental flag is given on the command line to Nuitka.
The testing of Nuitka is very heavyweight, e.g. when all Python code is compiled, and very often it is
interesting to compare behavior with and without a change.
When to remove it
When a feature becomes the default, we might choose to keep the old variant around, but normally we do not.
Then we remove the if and #if checks and drop the old code.
At this time, large scale testing will have demonstrated the viability of the code.
For both these dependencies, there is an inline copy (of Scons) that we fall back to in case Scons
is not available (in fact we have a version that still works with Python 2.6 and 2.7), and the same goes for
appdirs and every dependency.
But since inline copies are against the rules on some platforms, even ones that still do not contain the
package, we often even have our own wrapper which provides a minimal fallback or exposes a sane
interface for the subset of functionality that we use.
Note
Therefore, please if you consider adding one of these, get in touch with @Nuitka-pushers first
and get a green light.
We always add a version requirement, so that when tests run on versions as old as Python 2.6, the
installation would fail for unsupported versions. Sometimes we use older versions for
Python2 than for Python3, Jinja2 being a notable candidate, but generally we ought to avoid that. For
many tools, only being available for currently 3.7 or higher is good enough, esp. if they are run as
development tools, like autoformat-nuitka-source is.
Idea Bin
This is an area where we drop random ideas on our minds, to later sort out and put into action, which
could be code changes, plan changes, issues created, etc.
a = iter((2, 3))
b = next(a)
c = next(a)
del a
The list of development dependencies is in requirements-devel.txt.
a = iter((2, 3))
b = side_effect(next(a), 2)
c = side_effect(next(a), 3)
del a
a = iter((2, 3))
next(a)
b = 2
next(a)
c = 3
del a
When the del a is examined at the end of the scope, or due to another assignment to the same
variable ending the trace, we would have to consider the next uses, and retrofit the information
that they had no effect.
a = iter((2, 3))
b = 2
b = 3
del a
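The folding steps above amount to this: a known iterator over a constant tuple lets each next() result be predicted, provided the iterator cannot escape and the exhaustion point is known. A toy illustration, with fold_next_calls being an invented helper rather than anything in Nuitka:

```python
# Toy sketch: simulate traces for "a = iter((2, 3)); b = next(a); c = next(a)".
# Each next() on an iterator over a known constant tuple has a known result.

def fold_next_calls(constants, count=2):
    position = 0  # the trace tracks how far the iterator has advanced
    folded = []
    for _ in range(count):
        folded.append(constants[position])  # next(a) folds to a constant
        position += 1
    return folded

assert fold_next_calls((2, 3)) == [2, 3]  # b = 2, c = 3
```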
• Aliasing
Each time an assignment is made, an alias is created. A value may have different names.
a = iter(range(9))
b = a
c = next(b)
d = next(a)
If we fail to detect the aliasing nature, we will calculate d wrongly. We may incref and decref values to
trace it.
Aliasing is already traced automatically in SSA form: b is assigned to a version of a. So, that
should allow replacing it with this:
a = iter(range(9))
c = next(a)
d = next(a)
def f():
    a = 1
    b = 2
    exec("""a+=b;c=1""")
    return a, c
def f():
    a = 1
    b = 2
    a += b  #
    c = 1   # MaybeLocalVariables for everything except known local ones.
    return a, c
Prongs of Action
In this chapter, we keep track of currently ongoing prongs of action. This can get detailed and shows
things we strive for.
Builtin optimization
Definitely want to get built-in names under full control, so that variable references to module variables do
not have a twofold role. Currently they reference the module variable and also the potential built-in as a
fallback.
In terms of generated code size and complexity, for modules with many variables and uses of them, that is
horrible. But some_var (normally) cannot be a built-in and therefore needs no code to check for that each
time.
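The twofold role can be illustrated in plain Python. This mimics the semantics that generated code must currently preserve for every module-level name reference, not Nuitka's actual generated code; lookup_module_name is an invented helper.

```python
# Sketch of the twofold lookup for a module-level name: the module
# dictionary is tried first, and only on a miss does the built-in of
# the same name apply as a fallback.

import builtins

def lookup_module_name(module_dict, name):
    if name in module_dict:
        return module_dict[name]      # module variable wins
    return getattr(builtins, name)    # fallback: potential built-in

scope = {"some_var": 42}
assert lookup_module_name(scope, "some_var") == 42
assert lookup_module_name(scope, "len") is len  # falls back to the built-in
```

Proving that a name like some_var can never be a built-in would let the fallback branch, and its code, disappear entirely.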
This is also critical to getting to whole program optimization. Being certain what is what on the module
level will enable more definite knowledge about data flows and module interfaces.
Coverage Testing
And then there is coverage: it should be taken and merged from all Python versions and OSes, but I never
managed to merge between Windows and Linux for unknown reasons.
Python3 Performance
The Python3 lock for the thread state is making it slower by a lot. I have only experimental code that just
ignores the lock, but it likely only works on Linux, and I wonder why that lock is there in the first place.
Ignoring the locks cannot be good. But what ever updates that thread state pointer without a thread
change? And is this what ABI flags are about in this context; are there some that allow us to ignore the
locks?
An important bit would be to use a thread state, once acquired, for as much as possible. The
helpers do not accept it as an argument, but that ought to become an option; that way saving and restoring
an exception will be much faster, not to mention checking and dropping non-interesting, or rewriting,
exceptions.