Module: tf.compat.v1.tpu
Public API for tf._api.v2.tpu namespace
Modules
`experimental` module: Public API for tf._api.v2.tpu.experimental namespace
Classes
`class CrossShardOptimizer`: An optimizer that averages gradients across TPU shards.
`class PaddingSpec`: Represents the type of padding policies for `tpu.replicate`.
`class XLAOptions`: XLA compilation options.
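As a hedged sketch of how `CrossShardOptimizer` is typically used (the inner `GradientDescentOptimizer` and its learning rate are arbitrary choices for illustration, not part of this page), the class wraps an ordinary TF1 optimizer so that gradients are aggregated across TPU shards before being applied:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Wrap an ordinary optimizer so that gradients are aggregated across
# TPU shards (via cross-replica sums) before being applied.
# The inner optimizer and learning rate are arbitrary for illustration.
inner = tf.train.GradientDescentOptimizer(learning_rate=0.1)
optimizer = tf.tpu.CrossShardOptimizer(inner)

# In a real program, optimizer.minimize(loss) would be called inside a
# sharded TPU computation (e.g. under tf.compat.v1.tpu.shard); here we
# only construct the wrapper object.
```

Constructing the wrapper does not require TPU hardware; the cross-shard aggregation only happens when the computation actually runs on TPU shards.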
Functions
`batch_parallel(...)`: Shards `computation` along the batch dimension for parallel execution.
`bfloat16_scope(...)`: Scope class for bfloat16 variables so that the model uses custom getter.
`core(...)`: Returns the device name for a core in a replicated TPU computation.
`cross_replica_sum(...)`: Sums the input tensor across replicas according to `group_assignment`.
`initialize_system(...)`: Initializes a distributed TPU system for use with TensorFlow.
`outside_compilation(...)`: Builds part of a computation outside any current TPU replicate scope.
`replicate(...)`: Builds a graph operator that runs a replicated TPU computation.
`rewrite(...)`: Rewrites `computation` for execution on a TPU system.
`shard(...)`: Shards `computation` for parallel execution.
`shutdown_system(...)`: Shuts down a running distributed TPU system.
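The functions above form a typical TF1 TPU lifecycle: initialize the system, compile a computation with `rewrite` (or `replicate`/`shard`), then shut the system down. A minimal graph-construction sketch, assuming a TF1-style program (the `computation` body and the `[8]` placeholder shape are arbitrary illustrations; actually *running* these ops requires a session connected to a real TPU worker):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

def computation(x):
    # A trivial per-replica computation; the body is arbitrary.
    return x * x

with tf.Graph().as_default():
    # Adds the op that brings up the distributed TPU system.
    # It is only constructed here, not executed.
    init_op = tf.tpu.initialize_system()

    inputs = tf.placeholder(tf.float32, shape=[8])
    # rewrite() compiles `computation` via XLA for execution on a TPU.
    result = tf.tpu.rewrite(computation, [inputs])

    # Adds the op that shuts the TPU system back down.
    shutdown_op = tf.tpu.shutdown_system()
```

In a real deployment, `init_op` would be run once in a session before any TPU step, and `shutdown_op` once at the end; the graph construction itself works without TPU hardware.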
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Some content is licensed under the numpy license.
Last updated 2024-04-26 UTC.