tfp.experimental.auto_batching.instructions.Program
An auto-batchable program.
```
tfp.experimental.auto_batching.instructions.Program(
    graph, functions, var_defs, vars_in, vars_out, var_alloc=None
)
```
The primary ingredient of a Program is the control flow graph of
operations to perform. The operation language is a union that
serves two purposes: one subset is designed to be convenient to run
in Single Instruction Multiple Thread style, and the other to
generate from an upstream Python-embedded DSL.
As such, there are operations for explicit control transfers and
stack management, as well as for interpreted function calls (pending
lowering to explicit control transfers). The primitive computations
are encapsulated from the interpreter as Python functions. It is
not expected that one would author programs in this operation
language directly, but rather generate them with an appropriate
compiler.
Lowering consists of eliminating `FunctionCallOp` in favor of a specific sequence of lower-level instructions. A few choices for the lowered language may register as somewhat nonstandard:
- The variable name space is global; the upstream compiler is expected to generate unique variable names.
- There is no one stack; instead, every variable has its own stack. This reduces push traffic, because only the variables that are actually written to are ever pushed.
- Return addresses for pending lowered function calls are stored in a reserved variable that is otherwise in the same environment as the user variables.
- There are no designated registers for function arguments or return values, because all runtime storage is in Tensors, which need to have fixed types. Instead, it is the (post-lowering) caller's responsibility to write the arguments into the formal parameters and to retrieve the returned value(s) from the variable(s) in which the callee left them.
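The per-variable-stack design can be illustrated with a minimal plain-Python sketch (a stand-in for intuition only; the actual TFP runtime stores batched Tensors, not Python lists):

```python
from collections import defaultdict

# Each variable gets its own stack; a write pushes only that variable.
env = defaultdict(list)

def push(var, value):
    env[var].append(value)

def top(var):
    return env[var][-1]

def pop(var):
    env[var].pop()

push("x", 1)       # caller's x
push("x", 10)      # callee shadows x by pushing its own value
assert top("x") == 10
pop("x")           # callee exits; caller's x is restored
assert top("x") == 1
# "y" was never written, so its stack saw no push traffic at all
assert env["y"] == []
```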
The post-lowering function call sequence is:

- Push the arguments to the formal parameters;
- Pop any argument variables that are no longer used;
- Store the desired return address and jump to the beginning of the function's body (with a single `PushGotoOp`);
- When the function returns by executing `IndirectGotoOp`, assign the returned values to the variables that should receive them; and
- Pop the variables holding the returned values.
Note that this sequence requires that all calls in the source
language be to statically known functions, and for every function to
leave its results in the same variable(s) on every call (regardless
of internal control flow).
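The calling convention above can be traced with a plain-Python sketch. The variable names (`formal`, `ret`, `pc`) and the callee body are hypothetical stand-ins, not TFP API; in real lowered programs these are batched Tensor stacks and the jump is a `PushGotoOp`/`IndirectGotoOp` pair:

```python
# Hypothetical lowered call to a function f that leaves its result in "ret".
stacks = {"formal": [], "ret": [], "pc": []}

def push(var, v):
    stacks[var].append(v)

def pop(var):
    return stacks[var].pop()

def f_body():
    # f computes its result and leaves it in the agreed variable "ret".
    push("ret", stacks["formal"][-1] + 1)
    return pop("pc")           # IndirectGotoOp: jump to the stored return address

# Caller side:
push("formal", 41)             # 1. push argument into the formal parameter
                               # 2. (no dead argument variables to pop here)
push("pc", "after_call")       # 3. PushGotoOp: store return address, jump to f
resume = f_body()
assert resume == "after_call"  # 4. IndirectGotoOp brought control back here
result = stacks["ret"][-1]     #    read the returned value from "ret"
pop("ret")                     # 5. pop the variable holding the return value
pop("formal")
assert result == 42
```

Note how the callee must leave its result in the same variable (`ret`) on every call, exactly as the restriction above requires.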
Args

| Argument | Description |
|---|---|
| `graph` | A `ControlFlowGraph`. This is the graph of basic blocks to execute. |
| `functions` | A list of `Function`s giving the definitions of all the auto-batchable functions this `Program` may (recursively) call. |
| `var_defs` | A dict mapping variable names to `Type` objects giving their pattern of held Tensors. Each leaf of the pattern is a `TensorType` object giving the dtype and shape of that leaf. The shape excludes the batch dimension. |
| `vars_in` | A list of the names of the variables in which to store the inputs when starting. |
| `vars_out` | A pattern of the names of the variables from which to read the outputs when finished. |
| `var_alloc` | A dict mapping variable names to allocation strategies (see `VariableAllocation`). The semantics of an entry are "A proof has been found that this strategy suffices for this variable." |
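The shape convention for `var_defs` (leaf shapes exclude the batch dimension, which the runtime prepends) can be sketched in plain Python. `TensorType` here is a stand-in namedtuple, not the TFP class:

```python
import collections

# Stand-in for a TensorType: dtype plus the *unbatched* per-thread shape.
TensorType = collections.namedtuple("TensorType", ["dtype", "shape"])

var_defs = {
    "x": TensorType("float32", (3,)),  # each logical thread holds a length-3 vector
    "flag": TensorType("bool", ()),    # scalar per thread
}

batch_size = 7
# At runtime every variable is stored batched: the batch dim is prepended.
runtime_shape = (batch_size,) + var_defs["x"].shape
assert runtime_shape == (7, 3)
```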
Methods

`main_function`

```
main_function()
```

Return a representation of the main program as a `Function`.

`replace`

```
replace(
    var_defs=None, var_alloc=None
)
```

Return a copy of `self` with `var_defs` and/or `var_alloc` replaced.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2023-11-21 UTC.