tfp.experimental.auto_batching.stackless.ExecutionQueue
A priority queue of resumption points.
tfp.experimental.auto_batching.stackless.ExecutionQueue(
backend
)
Each resumption point is a pair of a program counter to resume at and a mask of the threads that are waiting there.
This class is a simple wrapper around Python's standard heapq implementation
of priority queues. There are just two subtleties:

- Dequeue gets all the threads that were waiting at that point, by coalescing
  multiple entries if needed.

- Enqueue drops entries with empty masks, because they never need to be resumed.
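To make those two behaviors concrete, here is a minimal sketch of an equivalent queue. It is not the TFP implementation: it assumes plain NumPy boolean arrays as thread masks and omits the `backend` argument that the real class takes.

    import heapq
    import itertools

    import numpy as np


    class ExecutionQueueSketch:
      """Priority queue of (program_counter, thread_mask) resumption points."""

      def __init__(self):
        self._heap = []
        # Tiebreaker so heapq never has to compare two masks directly.
        self._counter = itertools.count()

      def enqueue(self, program_counter, mask):
        # Drop entries whose mask is all False: those threads never resume here.
        if not np.any(mask):
          return
        heapq.heappush(self._heap, (program_counter, next(self._counter), mask))

      def dequeue(self):
        # Pop the lowest program counter, coalescing every entry waiting there.
        program_counter, _, mask = heapq.heappop(self._heap)
        while self._heap and self._heap[0][0] == program_counter:
          _, _, other_mask = heapq.heappop(self._heap)
          mask = np.logical_or(mask, other_mask)
        return program_counter, mask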
Methods
dequeue
dequeue()
enqueue
enqueue(
program_counter, mask
)
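For illustration, exercising the sketch above with hypothetical program counters and masks shows both behaviors at once (the real methods receive backend-specific mask tensors rather than NumPy arrays):

    q = ExecutionQueueSketch()
    q.enqueue(3, np.array([True, False, False]))
    q.enqueue(3, np.array([False, True, False]))   # coalesced with the entry above
    q.enqueue(7, np.array([False, False, False]))  # dropped: empty mask, never resumed
    print(q.dequeue())  # (3, array([ True,  True, False]))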