tf.tpu.experimental.DeviceAssignment
Mapping from logical cores in a computation to the physical TPU topology.
    tf.tpu.experimental.DeviceAssignment(
        topology: tf.tpu.experimental.Topology,
        core_assignment: np.ndarray
    )
Prefer to use the `DeviceAssignment.build()` helper to construct a `DeviceAssignment`; it is easier, if less flexible, than constructing a `DeviceAssignment` directly.
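For orientation, a minimal sketch of that recommended path, assuming a reachable TPU worker; the resolver argument, the slice size, and the hand-off to `tf.distribute.TPUStrategy` via `experimental_device_assignment` are environment-dependent assumptions, not part of this class's contract:

    import tensorflow as tf

    # Connect to the TPU cluster; the `tpu` argument depends on the environment
    # (empty string on a TPU VM, otherwise a name or gRPC address).
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
    tf.config.experimental_connect_to_cluster(resolver)

    # initialize_tpu_system returns the physical Topology of the TPU slice.
    topology = tf.tpu.experimental.initialize_tpu_system(resolver)

    # Data-parallel assignment: one logical core per replica,
    # one replica for every TPU core in the slice.
    device_assignment = tf.tpu.experimental.DeviceAssignment.build(
        topology, num_replicas=topology.num_tasks * topology.num_tpus_per_task)

    # The assignment can then be passed to a TPU strategy.
    strategy = tf.distribute.TPUStrategy(
        resolver, experimental_device_assignment=device_assignment)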
| Args | |
|---|---|
| `topology` | A `Topology` object that describes the physical TPU topology. |
| `core_assignment` | A logical to physical core mapping, represented as a rank 3 numpy array. See the description of the `core_assignment` property for more details. |
| Raises | |
|---|---|
| `ValueError` | If `topology` is not a `Topology` object. |
| `ValueError` | If `core_assignment` is not a rank 3 numpy array. |
| Attributes | |
|---|---|
| `core_assignment` | The logical to physical core mapping. |
| `num_cores_per_replica` | The number of cores per replica. |
| `num_replicas` | The number of replicas of the computation. |
| `topology` | A `Topology` that describes the TPU topology. |
Methods
build
    @classmethod
    build(
        topology: tf.tpu.experimental.Topology,
        computation_shape: Optional[np.ndarray] = None,
        computation_stride: Optional[np.ndarray] = None,
        num_replicas: int = 1,
        device_order_mode: tf.tpu.experimental.DeviceOrderMode = DeviceOrderMode.AUTO
    ) -> 'DeviceAssignment'
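A hedged sketch of how `computation_shape` and `num_replicas` interact, assuming `topology` came from `tf.tpu.experimental.initialize_tpu_system` on an eight-core slice whose mesh shape is `[2, 2, 1, 2]`; that mesh shape is an assumption and varies by TPU generation:

    # Each replica spans 2 cores of one chip; 4 replicas x 2 cores = 8 cores.
    da = tf.tpu.experimental.DeviceAssignment.build(
        topology,
        computation_shape=[1, 1, 1, 2],
        num_replicas=4)

    print(da.num_replicas)           # 4
    print(da.num_cores_per_replica)  # 2
    # Rank-3 mapping: [replica, logical core, topology mesh coordinate].
    print(da.core_assignment.shape)  # (4, 2, 4)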
coordinates
    coordinates(
        replica: int, logical_core: int
    ) -> Tuple
Returns the physical topology coordinates of a logical core.
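Continuing from the `device_assignment` built in the earlier sketch (the concrete values depend on the slice; the tuple has one entry per topology mesh dimension):

    # Physical mesh coordinates of replica 0's logical core 0, e.g. (0, 0, 0, 0).
    coords = device_assignment.coordinates(replica=0, logical_core=0)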
host_device
    host_device(
        replica: int = 0, logical_core: int = 0, job: Optional[str] = None
    ) -> str
Returns the CPU device attached to a logical core.
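For example, continuing the earlier sketch; the exact string depends on the cluster's job name and task layout:

    # CPU device of the host attached to replica 0's logical core 0,
    # e.g. something like '/job:worker/task:0/device:CPU:0'.
    host = device_assignment.host_device(replica=0, logical_core=0)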
lookup_replicas
    lookup_replicas(
        task_id: int, logical_core: int
    ) -> List[int]
Looks up replica ids by task number and logical core.
| Args | |
|---|---|
| `task_id` | TensorFlow task number. |
| `logical_core` | An integer identifying a logical core. |

| Returns |
|---|
| A sorted list of the replicas that are attached to that task and `logical_core`. |

| Raises | |
|---|---|
| `ValueError` | If no replica exists in the task that contains the logical core. |
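A small sketch, again assuming the `device_assignment` from the earlier example and that task 0 hosts at least one replica:

    # Replica ids whose logical core 0 is placed on TensorFlow task 0 (sorted);
    # raises ValueError if no replica on that task contains logical core 0.
    replica_ids = device_assignment.lookup_replicas(task_id=0, logical_core=0)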
tpu_device
    tpu_device(
        replica: int = 0, logical_core: int = 0, job: Optional[str] = None
    ) -> str
Returns the name of the TPU device assigned to a logical core.
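For instance, continuing the earlier sketch; the job name and device ordinal in the returned string are environment-dependent:

    # TPU device name for replica 0's logical core 0,
    # e.g. something like '/job:worker/task:0/device:TPU:0'.
    tpu_dev = device_assignment.tpu_device(replica=0, logical_core=0)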
tpu_ordinal
    tpu_ordinal(
        replica: int = 0, logical_core: int = 0
    ) -> int
Returns the ordinal of the TPU device assigned to a logical core.
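For example, continuing the earlier sketch; the concrete value depends on where the replica is placed:

    # Ordinal of the TPU device within its host, e.g. an integer in 0..7
    # on a host with eight attached TPU cores.
    ordinal = device_assignment.tpu_ordinal(replica=0, logical_core=0)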