tf.saved_model.experimental.VariablePolicy
Enum defining options for variable handling when saving.
NONE
No policy applied: distributed variables are saved as one variable, with no
device assignment attached.
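As a quick sketch (assuming SaveOptions normalizes its experimental_variable_policy argument to this enum, which is how the constructor behaves in recent TensorFlow releases), a SaveOptions built with no policy argument resolves to NONE:

```python
import tensorflow as tf

# No experimental_variable_policy passed: the option resolves to
# VariablePolicy.NONE, i.e. the default behavior described above.
opts = tf.saved_model.SaveOptions()
print(opts.experimental_variable_policy)
```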
SAVE_VARIABLE_DEVICES
When saving variables, also save their device assignment.
This is useful if one wants to hardcode devices in saved models, but it also
makes them non-portable if soft device placement is disabled (more details in
tf.config.set_soft_device_placement). This policy is currently not fully
supported by saved_model.load, and is mainly intended for cases where the
saved model will be read at a lower API level. In the example below, the
graph saved by the call to saved_model.save will have the variable devices
correctly specified:
exported = tf.train.Checkpoint()
with tf.device('/GPU:0'):
    exported.x_gpu = tf.Variable(1.0)
with tf.device('/CPU:0'):
    exported.x_cpu = tf.Variable(1.0)
tf.saved_model.save(
    exported, export_dir,
    options=tf.saved_model.SaveOptions(
        experimental_variable_policy=tf.saved_model.experimental
        .VariablePolicy.SAVE_VARIABLE_DEVICES))
Distributed variables are still saved as one variable under this policy.
EXPAND_DISTRIBUTED_VARIABLES
Distributed variables will be saved with information about their components,
allowing for their restoration on load. Also, the saved graph will contain
references to those variables. This is useful when one wants to use the
model for training in environments where the original distribution strategy
is not available.
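A minimal sketch of selecting this policy, assuming a machine where tf.distribute.MirroredStrategy can run (a single CPU replica is enough to produce a distributed variable) and using a hypothetical export path:

```python
import tensorflow as tf

export_dir = "/tmp/expanded_model"  # hypothetical export path

# Variables created inside a MirroredStrategy scope are distributed
# (mirrored) variables, even with only one replica.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    exported = tf.train.Checkpoint()
    exported.v = tf.Variable(3.0)

# Save component information for the distributed variable so it can be
# restored without the original strategy.
tf.saved_model.save(
    exported, export_dir,
    options=tf.saved_model.SaveOptions(
        experimental_variable_policy=tf.saved_model.experimental
        .VariablePolicy.EXPAND_DISTRIBUTED_VARIABLES))
```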
Class Variables

EXPAND_DISTRIBUTED_VARIABLES   <VariablePolicy.EXPAND_DISTRIBUTED_VARIABLES: 'expand_distributed_variables'>
NONE                           <VariablePolicy.NONE: None>
SAVE_VARIABLE_DEVICES          <VariablePolicy.SAVE_VARIABLE_DEVICES: 'save_variable_devices'>
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Some content is licensed under the numpy license.
Last updated 2024-04-26 UTC.