Migrate Your TensorFlow 1 Code To TensorFlow 2 - TensorFlow Core
The first step, before attempting to implement the changes described in this doc, is to try running the upgrade script.
This will do an initial pass at upgrading your code to TensorFlow 2.0. But it can't make your code idiomatic to 2.0. Your
code may still make use of tf.compat.v1 endpoints to access placeholders, sessions, collections, and other 1.x-style
functionality.
If your code works in TensorFlow 2.0 using tf.compat.v1.disable_v2_behavior() , there are still global behavioral
changes you may need to address. The major changes are:
Eager execution, v1.enable_eager_execution() : Any code that implicitly uses a tf.Graph will fail. Be sure to
wrap this code in a with tf.Graph().as_default() context.
Resource variables, v1.enable_resource_variables() : TF 2.0 resource variables are locked while being written to,
and so provide more intuitive consistency guarantees. This may change behavior in edge cases, may create extra
copies, and can have higher memory usage.
Tensor shapes, v1.enable_v2_tensorshape() : TF 2.0 simplifies the behavior of tensor shapes. Instead of
t.shape[0].value you can say t.shape[0] . These changes should be small, and it makes sense to fix them
right away. See TensorShape for examples.
Control flow, v1.enable_control_flow_v2() : The TF 2.0 control flow implementation has been simplified, and
so produces different graph representations. Please file bugs for any issues.
This guide will walk through several examples of converting TensorFlow 1.x code to TensorFlow 2.0. These changes will
let your code take advantage of performance optimizations and simplified API calls.
During conversion, eager execution allows easy debugging with standard Python tools like pdb .
After that, add a tf.function decorator to make it run efficiently in graph mode. See the AutoGraph Guide for more on how
this works.
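For example (a minimal sketch; the function itself is illustrative, not from the original code), a plain Python function that you have debugged eagerly can be wrapped in tf.function afterwards:

import tensorflow as tf

@tf.function
def scale_and_shift(x):
  # Traced into a TensorFlow graph on first call; debug the undecorated
  # version with pdb first, then add the decorator.
  return 2.0 * x + 1.0

print(scale_and_shift(tf.constant([1.0, 2.0])))  # tf.Tensor([3. 5.], shape=(2,), dtype=float32)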
Note that:
Unlike v1.Session.run , a tf.function has a fixed return signature and always returns all outputs. If this
causes performance problems, create two separate functions.
All name-based variable tracking is strongly discouraged in TF 2.0. Use Python objects to track variables.
Every v1.variable_scope should be converted to a Python object. Typically this will be one of:
tf.keras.layers.Layer
tf.keras.Model
tf.Module
These Layer and Model classes implement several other properties that remove the need for global collections. Their
.losses property can be a replacement for using the tf.GraphKeys.LOSSES collection.
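For example (a small sketch; the layer configuration is only illustrative), regularization losses can be read directly from the layer or model instead of from a collection:

layer = tf.keras.layers.Dense(
    4, kernel_regularizer=tf.keras.regularizers.l2(0.04))
_ = layer(tf.ones(shape=(1, 8)))  # Build the layer so the weights exist.
print(layer.losses)  # A list with one tensor: the l2 penalty on the kernel.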
Use the highest level API that works for your use case. Prefer tf.keras.Model.fit over building your own training
loops.
These high level functions manage a lot of the low-level details that might be easy to miss if you write your own training
loop. For example, they automatically collect the regularization losses, and set the training=True argument when
calling the model.
Use tf.data datasets for data input. These objects are efficient, expressive, and integrate well with TensorFlow.
model.fit(dataset, epochs=5)
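The dataset in the call above could be constructed like this (a sketch; images and labels are placeholder in-memory arrays and the buffer and batch sizes are arbitrary):

# `images` and `labels` are assumed to be NumPy arrays or tensors.
dataset = tf.data.Dataset.from_tensor_slices((images, labels))
dataset = dataset.shuffle(buffer_size=1000).batch(32)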
The tf.compat.v1 module contains the complete TensorFlow 1.x API, with its original semantics.
The TF2 upgrade script will convert symbols to their 2.0 equivalents if such a conversion is safe, i.e., if it can determine
that the behavior of the 2.0 version is exactly equivalent (for instance, it will rename v1.arg_max to tf.argmax , since
those are the same function).
After the upgrade script is done with a piece of code, it is likely there are many mentions of compat.v1 . It is worth going
through the code and converting these manually to the 2.0 equivalent (it should be mentioned in the log if there is one).
Setup
import tensorflow as tf
Low-level API use in TensorFlow 1.x often includes accessing collections implicitly, with methods like:
v1.global_variables
v1.losses.get_regularization_loss
Before converting
Here is what these patterns may look like in code using TensorFlow 1.x.
# Graph inputs are fed through placeholders in TF 1.x.
in_a = tf.placeholder(dtype=tf.float32, shape=(2))
in_b = tf.placeholder(dtype=tf.float32, shape=(2))

def forward(x):
  with tf.variable_scope("matmul", reuse=tf.AUTO_REUSE):
    W = tf.get_variable("W", initializer=tf.ones(shape=(2,2)),
                        regularizer=tf.contrib.layers.l2_regularizer(0.04))
    b = tf.get_variable("b", initializer=tf.zeros(shape=(2)))
    return W * x + b

out_a = forward(in_a)
out_b = forward(in_b)

reg_loss=tf.losses.get_regularization_loss(scope="matmul")
After converting
The regularizations are calculated manually, without referring to any global collection.
No sessions or placeholders.
W = tf.Variable(tf.ones(shape=(2,2)), name="W")
b = tf.Variable(tf.zeros(shape=(2)), name="b")
@tf.function
def forward(x):
  return W * x + b
out_a = forward([1,0])
print(out_a)
tf.Tensor(
[[1. 0.]
[1. 0.]], shape=(2, 2), dtype=float32)
out_b = forward([0,1])
regularizer = tf.keras.regularizers.l2(0.04)
reg_loss=regularizer(W)
The v1.layers module contains layer functions that relied on v1.variable_scope to define and reuse
variables.
Before converting
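The before-conversion code block is not reproduced in this excerpt; a typical v1.layers model of the kind being discussed looks roughly like this (a sketch, not the exact original):

def model(x, training, scope='model'):
  with tf.compat.v1.variable_scope(scope, reuse=tf.compat.v1.AUTO_REUSE):
    x = tf.compat.v1.layers.conv2d(x, 32, 3, activation=tf.nn.relu)
    x = tf.compat.v1.layers.max_pooling2d(x, (2, 2), 1)
    x = tf.compat.v1.layers.flatten(x)
    x = tf.compat.v1.layers.dropout(x, 0.1, training=training)
    x = tf.compat.v1.layers.dense(x, 64, activation=tf.nn.relu)
    x = tf.compat.v1.layers.batch_normalization(x, training=training)
    x = tf.compat.v1.layers.dense(x, 10)
  return x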
After converting
The simple stack of layers fits neatly into tf.keras.Sequential . (For more complex models see custom layers
and models, and the functional API.)
The conversion was one-to-one because there is a direct mapping from v1.layers to tf.keras.layers .
The training argument is passed to each layer by the model when it runs.
The first argument to the original model function (the input x ) is gone. This is because object layers separate
building the model from calling the model.
If you were using regularizers or initializers from tf.contrib , these have more argument changes than others.
The code no longer writes to collections, so functions like v1.losses.get_regularization_loss will no longer
return these values, potentially breaking your training loops.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu',
                           kernel_regularizer=tf.keras.regularizers.l2(0.04),
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(10)
])
tf.Tensor(
[[ 0.0997344 -0.15143616 -0.38038805 0.09059955 0.31005007 0.21420436
0.2818179 -0.26471275 0.263591 -0.44870085]], shape=(1, 10), dtype=float32)
Existing code often mixes lower-level TF 1.x variables and operations with higher-level v1.layers .
Before converting
After converting
To convert this code, follow the pattern of mapping layers to layers as in the previous example.
A v1.variable_scope is effectively a layer of its own. So rewrite it as a tf.keras.layers.Layer . See the guide for
details.
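The definition of CustomLayer is not included in this excerpt; one possible definition that matches the outputs shown below (a sketch, not necessarily the original) is:

class CustomLayer(tf.keras.layers.Layer):
  def build(self, input_shape):
    # Create the variable once the input shape is known.
    self.w = self.add_weight(
        shape=input_shape[1:],
        dtype=tf.float32,
        initializer=tf.keras.initializers.ones(),
        trainable=True)

  @tf.function
  def call(self, inputs, training=None):
    # `training` may be a Python bool, or a tensor when called in graph mode.
    if training:
      return inputs + self.w
    else:
      return inputs + self.w * 0.5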
custom_layer = CustomLayer()
print(custom_layer([1]).numpy())
print(custom_layer([1], training=True).numpy())
[1.5]
[2.]
Note:
Subclassed Keras models & layers need to run in both v1 graphs (no automatic control dependencies) and in eager
mode. Wrap the call() in a tf.function() to get autograph and automatic control dependencies.
Don't forget to accept a training argument to call() . Sometimes it is a tf.Tensor , and sometimes it is a
Python boolean.
Create model variables in the constructor or in Model.build . In Model.build you have access to the input
shape, so you can create weights with matching shape.
Don't keep tf.Tensor s in your objects. They might get created either in a tf.function or in the eager
context, and these tensors behave differently.
Use tf.Variable s for state; they are always usable from both contexts.
A large amount of older TensorFlow 1.x code uses the Slim library, which was packaged with TensorFlow 1.x as
tf.contrib.layers . As a contrib module, this is no longer available in TensorFlow 2.0, even in tf.compat.v1 .
Converting code using Slim to TF 2.0 is more involved than converting repositories that use v1.layers . In fact, it may
make sense to convert your Slim code to v1.layers first, then convert to Keras.
If you use them, split normalizer_fn and activation_fn into their own layers
Separable conv layers map to one or more different Keras layers (depthwise, pointwise, and separable Keras
layers)
Slim and v1.layers have different arg names & default values
If you use Slim pre-trained models, try out Keras's pre-trained models from tf.keras.applications or TF Hub's
TF2 SavedModels exported from the original Slim code.
Some tf.contrib layers might not have been moved to core TensorFlow but have instead been moved to the TF Addons package.
Training
There are many ways to feed data to a tf.keras model. They will accept Python generators and Numpy arrays as input.
The recommended way to feed data to a model is to use the tf.data package, which contains a collection of high
performance classes for manipulating data.
If you are still using tf.queue , these are now only supported as data-structures, not as input pipelines.
Using Datasets
The TensorFlow Datasets package ( tfds ) contains utilities for loading predefined datasets as tf.data.Dataset
objects.
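The loading and preprocessing code is not shown in this excerpt; the examples below assume something along these lines (a sketch: the scale function, the constants, and NUM_EPOCHS are assumptions consistent with the later usage):

import tensorflow_datasets as tfds

datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']

BUFFER_SIZE = 10000
BATCH_SIZE = 64
NUM_EPOCHS = 5

def scale(image, label):
  # Convert the uint8 pixel values to floats in [0, 1].
  image = tf.cast(image, tf.float32) / 255.0
  return image, label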
Downloading and preparing dataset mnist/3.0.1 (download: 11.06 MiB, generated: 21.00 MiB, total: 32.06 MiB
To keep the example short, trim the dataset to only return 5 batches:
train_data = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
test_data = mnist_test.map(scale).batch(BATCH_SIZE)
STEPS_PER_EPOCH = 5
train_data = train_data.take(STEPS_PER_EPOCH)
test_data = test_data.take(STEPS_PER_EPOCH)
If you don't need low level control of your training process, using Keras's built-in fit , evaluate , and predict
methods is recommended. These methods provide a uniform interface to train the model regardless of the
implementation (sequential, functional, or sub-classed).
Here is an example of training a model using a Dataset . (For details on how this works see tutorials.)
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu',
                           kernel_regularizer=tf.keras.regularizers.l2(0.02),
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(10)
])

# Configure the training procedure before calling fit.
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

model.fit(train_data, epochs=NUM_EPOCHS)
loss, acc = model.evaluate(test_data)

print("Loss {}, Accuracy {}".format(loss, acc))
Epoch 1/5
5/5 [==============================] - 0s 6ms/step - loss: 1.6467 - accuracy: 0.4750
Epoch 2/5
5/5 [==============================] - 0s 6ms/step - loss: 0.4920 - accuracy: 0.8719
Epoch 3/5
5/5 [==============================] - 0s 6ms/step - loss: 0.3230 - accuracy: 0.9438
Epoch 4/5
5/5 [==============================] - 0s 5ms/step - loss: 0.2365 - accuracy: 0.9781
Epoch 5/5
5/5 [==============================] - 0s 6ms/step - loss: 0.1897 - accuracy: 0.9844
5/5 [==============================] - 0s 4ms/step - loss: 1.4775 - accuracy: 0.8062
Loss 1.4774526357650757, Accuracy 0.8062499761581421
If the Keras model's training step works for you, but you need more control outside that step, consider using the
tf.keras.Model.train_on_batch method, in your own data-iteration loop.
This method has many of the advantages of the methods mentioned in the previous section, but gives the user control
of the outer loop.
Note: train_on_batch and test_on_batch, by default return the loss and metrics for the single batch. If you pass
reset_metrics=False they return accumulated metrics and you must remember to appropriately reset the metric accumulators.
Also remember that some metrics like AUC require reset_metrics=False to be calculated correctly.
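A minimal sketch of such a loop, reusing the model and datasets from above (the exact reporting is illustrative):

for epoch in range(NUM_EPOCHS):
  # Reset the metric accumulators at the start of each epoch.
  model.reset_metrics()

  for image_batch, label_batch in train_data:
    train_result = model.train_on_batch(image_batch, label_batch)

  # Accumulate metrics across the evaluation batches.
  for image_batch, label_batch in test_data:
    eval_result = model.test_on_batch(image_batch, label_batch,
                                      reset_metrics=False)

  print("epoch", epoch,
        "eval:", dict(zip(model.metrics_names, eval_result)))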
If you need more flexibility and control, you can have it by implementing your own training loop. There are three steps:
1. Iterate over a Python generator or tf.data.Dataset to get batches of examples.
2. Use tf.GradientTape to collect gradients.
3. Use one of the tf.keras.optimizers to apply weight updates to the model's variables.
Remember:
Always include a training argument on the call method of subclassed layers and models.
Make sure to call the model with the training argument set correctly.
Depending on usage, model variables may not exist until the model is run on a batch of data.
You need to manually handle things like regularization losses for the model.
There is no need to add manual control dependencies. Even inside a tf.function , operations act as in eager mode.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu',
                           kernel_regularizer=tf.keras.regularizers.l2(0.02),
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(0.001)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
@tf.function
def train_step(inputs, labels):
  with tf.GradientTape() as tape:
    predictions = model(inputs, training=True)
    regularization_loss = tf.math.add_n(model.losses)
    pred_loss = loss_fn(labels, predictions)
    total_loss = pred_loss + regularization_loss

  gradients = tape.gradient(total_loss, model.trainable_variables)
  optimizer.apply_gradients(zip(gradients, model.trainable_variables))

for epoch in range(NUM_EPOCHS):
  for inputs, labels in train_data:
    train_step(inputs, labels)
  print("Finished epoch", epoch)
Finished epoch 0
Finished epoch 1
Finished epoch 2
Finished epoch 3
Finished epoch 4
In TensorFlow 2.0, metrics and losses are objects. These work both eagerly and in tf.function s.
cce = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
cce([[1, 0]], [[-1.0,3.0]]).numpy()
4.01815
Metric.update_state() —add new observations.
Metric.result() —get the current result of the metric, given the observed values.
Metric.reset_states() —clear all observations.
The object itself is callable. Calling updates the state with new observations, as with update_state , and returns the
new result of the metric.
You don't have to manually initialize a metric's variables, and because TensorFlow 2.0 has automatic control
dependencies, you don't need to worry about those either.
The code below uses a metric to keep track of the mean loss observed within a custom training loop.
@tf.function
def train_step(inputs, labels):
  with tf.GradientTape() as tape:
    predictions = model(inputs, training=True)
    regularization_loss = tf.math.add_n(model.losses)
    pred_loss = loss_fn(labels, predictions)
    total_loss = pred_loss + regularization_loss

  gradients = tape.gradient(total_loss, model.trainable_variables)
  optimizer.apply_gradients(zip(gradients, model.trainable_variables))
  # Track the mean loss and accuracy observed during the epoch.
  loss_metric.update_state(total_loss)
  accuracy_metric.update_state(labels, predictions)
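The metric objects used in train_step and the outer epoch loop are not shown in this excerpt; a sketch that matches the per-epoch output below (the metric names and print formatting are illustrative):

loss_metric = tf.keras.metrics.Mean(name='train_loss')
accuracy_metric = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')

for epoch in range(NUM_EPOCHS):
  # Reset the metric accumulators at the start of each epoch.
  loss_metric.reset_states()
  accuracy_metric.reset_states()

  for inputs, labels in train_data:
    train_step(inputs, labels)

  print('Epoch:', epoch)
  print('  loss:     {:.3f}'.format(float(loss_metric.result())))
  print('  accuracy: {:.3f}'.format(float(accuracy_metric.result())))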
Epoch: 0
loss: 0.148
accuracy: 0.991
Epoch: 1
loss: 0.122
accuracy: 1.000
Epoch: 2
loss: 0.104
accuracy: 1.000
Epoch: 3
loss: 0.090
accuracy: 1.000
Epoch: 4
loss: 0.081
In TensorFlow 2.0, Keras models are more consistent about handling metric names.
Now when you pass a string in the list of metrics, that exact string is used as the metric's name . These names are visible
in the history object returned by model.fit , and in the logs passed to keras.callbacks .
model.compile(
optimizer = tf.keras.optimizers.Adam(0.001),
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics = ['acc', 'accuracy', tf.keras.metrics.SparseCategoricalAccuracy(name="my_accuracy")])
history = model.fit(train_data)
5/5 [==============================] - 0s 6ms/step - loss: 0.0943 - acc: 0.9969 - accuracy: 0.9969 - my_acc
history.history.keys()
This differs from previous versions where passing metrics=["accuracy"] would result in dict_keys(['loss',
'acc'])
Keras optimizers
All epsilons now default to 1e-7 instead of 1e-8 (which is negligible in most use cases).
v1.train.MomentumOptimizer can be directly replaced by the SGD optimizer using the momentum argument:
tf.keras.optimizers.SGD(..., momentum=...) .
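For example (the learning rate and momentum values are placeholders):

# v1.train.MomentumOptimizer(learning_rate=0.01, momentum=0.9) becomes:
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)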
Warning: If you see a change in convergence behavior for your models, check the default learning rates.
TensorBoard
TensorFlow 2 includes significant changes to the tf.summary API used to write summary data for visualization in
TensorBoard. For a general introduction to the new tf.summary , there are several tutorials available that use the TF 2
API. This includes a TensorBoard TF 2 Migration Guide.
Checkpoint compatibility
Old-style name-based checkpoints can still be loaded, if you're careful. The code conversion process may result in
variable name changes, but there are workarounds.
The simplest approach is to line up the names of the new model with the names in the checkpoint:
Keras models also take a name argument, which they set as the prefix for their variables.
The v1.name_scope function can be used to set variable name prefixes. This is very different from
tf.variable_scope . It only affects names, and doesn't track variables & reuse.
If that does not work for your use-case, try the v1.train.init_from_checkpoint function. It takes an
assignment_map argument, which specifies the mapping from old names to new names.
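A sketch of what that looks like (the checkpoint path and scope names here are placeholders, not values from the original):

# Map variables saved under the old "matmul/" scope in the checkpoint to the
# corresponding variables under the new "dense/" scope in the current model.
tf.compat.v1.train.init_from_checkpoint(
    '/path/to/old_checkpoint',
    assignment_map={'matmul/': 'dense/'})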
Note: Unlike object based checkpoints, which can defer loading, name-based checkpoints require that all variables be built when the
function is called. Some models defer building variables until you call build or run the model on a batch of data.
The TensorFlow Estimator repository includes a conversion tool to upgrade the checkpoints for premade estimators
from TensorFlow 1.X to 2.0. It may serve as an example of how to build a tool for a similar use-case.
TensorFlow 2.x saved_models work in TensorFlow 1.x—if all the ops are supported.
A Graph.pb or Graph.pbtxt
There is no straightforward way to upgrade a raw Graph.pb file to TensorFlow 2.0. Your best bet is to upgrade the code
that generated the file.
But, if you have a "Frozen graph" (a tf.Graph where the variables have been turned into constants), then it is possible to
convert this to a concrete_function using v1.wrap_function :
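The wrap_frozen_graph helper used in the example below is not defined in this excerpt; one way to write it, based on v1.wrap_function and WrappedFunction.prune (a sketch):

def wrap_frozen_graph(graph_def, inputs, outputs):
  # Import the GraphDef inside a wrapped function, then prune it down to the
  # requested input and output tensors.
  def _imports_graph_def():
    tf.compat.v1.import_graph_def(graph_def, name="")
  wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, [])
  import_graph = wrapped_import.graph
  return wrapped_import.prune(
      tf.nest.map_structure(import_graph.as_graph_element, inputs),
      tf.nest.map_structure(import_graph.as_graph_element, outputs))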
For example, here is a frozen graph for Inception v1, from 2016:
path = tf.keras.utils.get_file(
    'inception_v1_2016_08_28_frozen.pb',
    'http://storage.googleapis.com/download.tensorflow.org/models/inception_v1_2016_08_28_frozen.pb.tar.gz',
    untar=True)

graph_def = tf.compat.v1.GraphDef()
loaded = graph_def.ParseFromString(open(path, 'rb').read())

inception_func = wrap_frozen_graph(
    graph_def, inputs='input:0',
    outputs='InceptionV1/InceptionV1/Mixed_3b/Branch_1/Conv2d_0a_1x1/Relu:0')
Estimators
When you use estimators, you can use input_fn() , tf.estimator.TrainSpec , and tf.estimator.EvalSpec from
TensorFlow 1.x.
BUFFER_SIZE = 10000
BATCH_SIZE = 64

# Define the estimator's input_fn, reusing mnist_train and scale from the
# Dataset example above.
def input_fn():
  train_data = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
  return train_data.repeat()
There are some differences in how to construct your estimators in TensorFlow 2.0.
We recommend that you define your model using Keras, then use the tf.keras.estimator.model_to_estimator
utility to turn your model into an estimator. The code below shows how to use this utility when creating and training an
estimator.
def make_model():
  return tf.keras.Sequential([
      tf.keras.layers.Conv2D(32, 3, activation='relu',
                             kernel_regularizer=tf.keras.regularizers.l2(0.02),
                             input_shape=(28, 28, 1)),
      tf.keras.layers.MaxPooling2D(),
      tf.keras.layers.Flatten(),
      tf.keras.layers.Dropout(0.1),
      tf.keras.layers.Dense(64, activation='relu'),
      tf.keras.layers.BatchNormalization(),
      tf.keras.layers.Dense(10)
  ])

model = make_model()

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

estimator = tf.keras.estimator.model_to_estimator(
    keras_model=model
)
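The estimator can then be trained and evaluated with tf.estimator.train_and_evaluate , for example (a sketch; the step counts reuse STEPS_PER_EPOCH and the NUM_EPOCHS constant assumed earlier):

tf.estimator.train_and_evaluate(
    estimator,
    train_spec=tf.estimator.TrainSpec(input_fn, max_steps=STEPS_PER_EPOCH * NUM_EPOCHS),
    eval_spec=tf.estimator.EvalSpec(input_fn))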
Note: We do not support creating weighted metrics in Keras and converting them to weighted metrics in the Estimator API using
model_to_estimator . You will have to create these metrics directly on the estimator spec using the add_metrics function.
If you have an existing custom estimator model_fn that you need to maintain, you can convert your model_fn to use a
Keras model.
However, for compatibility reasons, a custom model_fn will still run in 1.x-style graph mode. This means there is no
eager execution and no automatic control dependencies.
To make your custom model_fn work in TF 2.0, if you prefer minimal changes to the existing code, tf.compat.v1
symbols such as optimizers and metrics can be used.
Using a Keras model in a custom model_fn is similar to using it in a custom training loop:
Note: "Updates" are changes that need to be applied to a model after each batch. For example, the moving averages of the mean
and variance in a layers.BatchNormalization layer.
The following code creates an estimator from a custom model_fn , illustrating all of these concerns.
def my_model_fn(features, labels, mode):
  # The Keras model creation and the computation of `predictions`,
  # `total_loss`, and `train_op` are elided from this excerpt.
  optimizer = tf.compat.v1.train.AdamOptimizer()
  loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

  if mode == tf.estimator.ModeKeys.PREDICT:
    return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)

  accuracy = tf.compat.v1.metrics.accuracy(labels=labels,
                                           predictions=tf.math.argmax(predictions, axis=1),
                                           name='acc_op')

  return tf.estimator.EstimatorSpec(
      mode=mode,
      predictions=predictions,
      loss=total_loss,
      train_op=train_op, eval_metric_ops={'accuracy': accuracy})
If you want to get rid of all TF 1.x symbols and upgrade your custom model_fn to native TF 2.0, you need to update the
optimizer and metrics to tf.keras.optimizers and tf.keras.metrics .
In the custom model_fn , besides the above changes, more upgrades need to be made:
Use Optimizer.get_updates() if the loss is a scalar loss Tensor (not a callable). The first element in the
returned list is the desired train_op / minimize_op .
For the above example of my_model_fn , the migrated code with 2.0 symbols is shown as:
# Inside the migrated my_model_fn(features, labels, mode):

# Upgrade to tf.keras.metrics.
accuracy_obj = tf.keras.metrics.Accuracy(name='acc_obj')
accuracy = accuracy_obj.update_state(
    y_true=labels, y_pred=tf.math.argmax(predictions, axis=1))

train_op = None
# `training` is (mode == tf.estimator.ModeKeys.TRAIN) in the surrounding model_fn.
if training:
  # Upgrade to tf.keras.optimizers.
  optimizer = tf.keras.optimizers.Adam()
  # Manually assign tf.compat.v1.global_step variable to optimizer.iterations
  # to make tf.compat.v1.train.global_step increase correctly.
  # This assignment is a must for any `tf.train.SessionRunHook` specified in
  # the estimator, as SessionRunHooks rely on global step.
  optimizer.iterations = tf.compat.v1.train.get_or_create_global_step()
  # Get both the unconditional updates (the None part)
  # and the input-conditional updates (the features part).
  update_ops = model.get_updates_for(None) + model.get_updates_for(features)
  # Compute the minimize_op.
  minimize_op = optimizer.get_updates(
      total_loss,
      model.trainable_variables)[0]
  train_op = tf.group(minimize_op, *update_ops)

return tf.estimator.EstimatorSpec(
    mode=mode,
    predictions=predictions,
    loss=total_loss,
    train_op=train_op,
    eval_metric_ops={'Accuracy': accuracy_obj})
Premade Estimators
The optimizer , dnn_optimizer and linear_optimizer arguments of premade Estimators have been updated to
accept tf.keras.optimizers instead of tf.compat.v1.train.Optimizer .
If you do not pass in an optimizer , dnn_optimizer or linear_optimizer arg, or if you specify the
optimizer arg as a string in your code, you don't need to change anything: tf.keras.optimizers is used by
default. Otherwise, you need to update it from tf.compat.v1.train.Optimizer to its corresponding
tf.keras.optimizers equivalent.
Checkpoint Converter
The migration to keras.optimizers will break checkpoints saved using TF 1.x, as tf.keras.optimizers generates a
different set of variables to be saved in checkpoints. To make old checkpoints reusable after your migration to TF 2.0, try
the checkpoint converter tool.
$ curl -O https://raw.githubusercontent.com/tensorflow/estimator/master/tensorflow_estimator/python/estima
$ python checkpoint_converter.py -h
positional arguments:
{dnn,linear,combined}
The type of estimator to be converted. So far, the
checkpoint converter only supports Canned Estimator.
So the allowed types include linear, dnn and combined.
source_checkpoint Path to source checkpoint file to be read in.
source_graph Path to source graph file to be read in.
target_checkpoint Path to checkpoint file to be written out.
optional arguments:
TensorShape
This class was simplified to hold int s, instead of tf.compat.v1.Dimension objects. So there is no need to call
.value() to get an int .
The following demonstrates the differences between TensorFlow 1.x and TensorFlow 2.0.
If you had this in TF 1.x:
value = shape[i].value
Then do this in TF 2.0:
value = shape[i]
If you had this in TF 1.x (or used any other dimension method):
dim = shape[i]
dim.assert_is_compatible_with(other_dim)
Then do this in TF 2.0:
other_dim = 16
Dimension = tf.compat.v1.Dimension

if shape.rank is None:
  dim = Dimension(None)
else:
  dim = shape.dims[i]
dim.is_compatible_with(other_dim)  # or any other dimension method
True
shape = tf.TensorShape(None)

if shape:
  dim = shape.dims[i]
  dim.is_compatible_with(other_dim)  # or any other dimension method
The boolean value of a tf.TensorShape is True if the rank is known, False otherwise.
print(bool(tf.TensorShape([]))) # Scalar
print(bool(tf.TensorShape([0]))) # 0-length vector
print(bool(tf.TensorShape([1]))) # 1-length vector
print(bool(tf.TensorShape([None]))) # Unknown-length vector
print(bool(tf.TensorShape([1, 10, 100]))) # 3D tensor
print(bool(tf.TensorShape([None, None, None]))) # 3D tensor with no known dimensions
print()
print(bool(tf.TensorShape(None))) # A tensor with unknown rank.
True
True
True
True
True
True
False
Other Changes
Remove tf.colocate_with : TensorFlow's device placement algorithms have improved significantly. This should
no longer be necessary. If removing it causes a performance degradation, please file a bug.
Conclusions
The overall process is:
1. Run the upgrade script.
2. Remove contrib symbols.
3. Switch your models to an object-oriented style (Keras).
4. Use tf.keras or tf.estimator training and evaluation loops where you can.
5. Otherwise, use custom loops, but be sure to avoid sessions & collections.
It takes a little work to convert code to idiomatic TensorFlow 2.0, but every change results in:
Easier debugging.