TensorFlow Cheatsheet Zero To Mastery V1.01
CHEAT SHEET
DANIEL BOURKE
V1.01
HEEELLLOOOOO!
I’m Andrei Neagoie, Founder and Lead Instructor of the Zero To Mastery Academy.
After working as a Senior Software Developer over the years, I now dedicate 100% of my time to
teaching others in-demand skills, helping them break into the tech industry, and advancing their
careers.
In only a few years, over 1,000,000 students around the world have taken Zero To Mastery courses
and many of them are now working at top tier companies like Apple, Google, Amazon, Tesla, IBM,
Facebook, and Shopify, just to name a few.
This cheat sheet, created by our TensorFlow instructor (Daniel Bourke) provides you with the key
TensorFlow information and concepts that you need to know.
If you want to not only learn TensorFlow but also get the exact steps to build your own projects
and get hired as a Machine Learning Engineer, then check out our Career Paths.
Happy Coding!
Andrei
TensorFlow has the big advantage of being able to leverage accelerated computing hardware (GPUs and TPUs) for faster machine learning
model training.
This cheatsheet provides a quick reference guide for beginners and expert users, along with commentary to help you understand
the code.
It is focused on the Python API of TensorFlow, however, the principles can be applied to other languages.
All techniques and code mentioned in this cheatsheet are referenced via the TensorFlow documentation.
Many of the concepts discussed in this cheatsheet are covered as hands-on coding materials in the Zero To Mastery TensorFlow
Bootcamp course. You can get started by:
💻 Take the course at the ZTM Academy (the first 3 hours are free!)
🦾 Get all of the code on GitHub
📖 Read the beautiful online book version of the course at learntensorflow.io
TensorFlow Key Terms
Tensors
Tensors are the fundamental building blocks of TensorFlow and machine learning in general.
They are multi-dimensional arrays that can hold numerical data of various types, such as integers, floats, and booleans.
You can represent almost any kind of data (images, rows and columns, text) as some kind of tensor.
In TensorFlow, tensors are represented as tf.Tensor objects, which are immutable and have a specified data type and shape.
You can create a tensor with values [1, 2, 3] using:
import tensorflow as tf
tensor = tf.constant([1, 2, 3])
However, most of the time you won’t be creating tensors by hand, as TensorFlow will provide functionality for loading (creating
tensors) and manipulating data (finding patterns in tensors).
Variables
Variables are tensors that can be modified during model training.
They are used to represent neural network model parameters, such as weights and biases.
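For example, a minimal sketch of creating and updating a variable (the values here are illustrative):

import tensorflow as tf

weight = tf.Variable(0.5)   # create a variable with an initial value
weight.assign(0.7)          # set a new value in place
weight.assign_add(0.1)      # add to the current value (now 0.8)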
Layers
Layers are the building blocks of neural network models. They represent the transformations (in the form of mathematical operations) applied to the input data, and can include operations
such as convolution, pooling, and activation functions.
In TensorFlow, layers are represented as tf.keras.layers.Layer objects, which can be combined to create complex neural network
architectures.
import tensorflow as tf
layer = tf.keras.layers.Dense(units=10)  # a fully connected layer with 10 outputs
Models
A model combines layers together, including:
The layers (the transformations applied to the input data).
The parameters (the patterns or weights and bias values that the model will learn during training).
In TensorFlow, models are typically created using the tf.keras.Model class, which provides a high-level API for building and training
models.
Loss Functions
The whole goal of machine learning is to build a model which can make some kind of prediction given a piece of input data.
Loss functions are used to measure the difference between the predicted and true values in a machine learning problem.
The higher the loss value, the more wrong the model.
So the lower the loss value, the better.
A model with a loss value of 0 is either:
Broken (because there has been a data leak and the model has seen data it was supposed to test on).
Perfect (it predicts the unseen data based on patterns it learned in the training data perfectly).
Loss functions are used to guide the training process and update the model parameters.
In TensorFlow, loss functions are represented as tf.keras.losses objects, which can be customized or chosen from a wide range of
built-in options.
Optimizers
Optimizers are used to update the model parameters during training, based on the gradients of the loss function.
They are responsible for finding the optimal values of the model parameters that minimize the loss.
In essence, an optimizer will adjust the weights and bias values in a neural network’s layers to make sure the model’s patterns
better match the input data.
If the optimizer does its job right and the model’s learned parameters represent the target data better, the loss value should go
down.
In TensorFlow, optimizers are represented as tf.keras.optimizers objects, which can be customized or chosen from a wide range of
built-in options.
Metrics
Metrics are used to evaluate the performance of a machine learning model during training and validation.
They provide additional information beyond the loss function, such as accuracy or precision.
One defining factor of metrics is that they are generally more human-readable than loss function values.
In TensorFlow, metrics are represented as tf.keras.metrics objects, which can be customized or chosen from a wide range of built-
in options.
Callbacks
Callbacks are functions that are executed at various stages of training, allowing you to customize and monitor the training process.
They can be used to implement several helpful auxiliary functions to training such as:
Early stopping — stop a model from training if it hasn’t improved in a certain amount of steps.
Learning rate scheduling — adjust a model’s learning rate (to better facilitate learning) at programmatic intervals.
In TensorFlow, callbacks are represented as tf.keras.callbacks objects, which can be customized or chosen from a wide range of
built-in options.
Preprocessing
Preprocessing is about getting your data into a format a model can learn from. In TensorFlow, preprocessing is typically done using the tf.data module, which provides tools for loading, transforming, and
batching data, or directly via model layers with tf.keras.layers .
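For example, a minimal sketch of a tf.data input pipeline (the feature and label values here are illustrative):

import tensorflow as tf

features = tf.constant([[1.0], [2.0], [3.0], [4.0]])
labels = tf.constant([0, 1, 0, 1])

# Create a dataset, shuffle it, batch it and prefetch batches for faster loading
dataset = tf.data.Dataset.from_tensor_slices((features, labels))
dataset = dataset.shuffle(buffer_size=4).batch(2).prefetch(tf.data.AUTOTUNE)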
Data Augmentation
Data augmentation is a technique used in computer vision to increase the size and diversity of the training dataset, by applying
transformations to the data, such as rotations, flips, and color shifts.
It can help to prevent overfitting (a model learning the patterns of the data too well) and improve the generalization of the model.
In TensorFlow, data augmentation can be implemented using various layers from tf.keras.layers , which provides a wide range of
image transformations, such as tf.keras.layers.RandomFlip , tf.keras.layers.RandomRotation and tf.keras.layers.RandomZoom .
Transfer Learning
Transfer learning is a technique used to leverage the knowledge learned from pre-trained models, in order to improve the
performance of a new model on a related task.
For example, you could take a model trained to predict on images of cars and adjust it to be able to predict on trucks (from one
domain to another similar but slightly different domain).
The patterns learned from predicting on cars can often help with predicting on trucks.
It involves freezing some or all of the layers in the pre-trained model, and training only the new layers on the new data.
In TensorFlow, transfer learning can be implemented using the tf.keras.applications module, which provides a wide range of pre-
trained models on various tasks.
In general, when approaching a new machine learning problem, it is advised to try and use transfer learning wherever
possible.
The Zero To Mastery TensorFlow Bootcamp course covers transfer learning in sections 04, 05, and 06.
Checkpointing
Model checkpointing is the process of saving the model parameters during training, so that training can be resumed from the last
saved checkpoint in case of interruption or failure.
In TensorFlow, checkpointing can be implemented using the tf.keras.callbacks.ModelCheckpoint callback, which saves the model
weights at specified steps.
1. Installing TensorFlow
There are several ways to install TensorFlow; one way is to use pip in your terminal:
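# Install the latest stable release of TensorFlow
pip install tensorflow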
For more ways to install TensorFlow such as via a Docker container, see the TensorFlow install page.
2. Importing TensorFlow
The common way to import TensorFlow with Python is to use the abbreviation tf :
import tensorflow as tf
Create Tensors
Tensors are the basic building blocks in TensorFlow. They are used to represent some kind of data in numerical form.
You can create tensors using tf.constant :
scalar = tf.constant(42)
vector = tf.constant([1, 2, 3])
matrix = tf.constant([[1, 2], [3, 4]])
Note: As mentioned before, TensorFlow will do most of the tensor creation behind the scenes for you. In a typical machine learning
workflow, it is rare to create tensors by hand.
# Create an example tensor to get information from
tensor = tf.constant([[1, 2], [3, 4]])

# What are the dimensions of the tensor? (e.g. [224, 224, 3] -> [height, width, color_channels] for an image)
tensor_shape = tf.shape(tensor)

# How many dimensions are in the tensor? (e.g. [224, 224, 3] = 3 dimensions)
tensor_rank = tf.rank(tensor)

# How many elements are in the tensor in total? (e.g. [224, 224, 3] = 150528)
tensor_size = tf.size(tensor)
Note: The shape, rank and size of a tensor will vary widely depending on the data you’re working with. For example, 2D images
generally take on the shape of [height, width, color_channels] and text could have the shape [tokens, embedding_dimension] where a
token is a representation of a word/letter and embedding dimension is a numerical learnable tensor representing that token.
Reshape Tensors
One of the main issues in machine learning, deep learning and AI in general is making sure your tensor shapes line up.
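You can change a tensor's shape (without changing its values) via tf.reshape . A small sketch (the values are illustrative):

# Reshape a 6-element tensor into a 2x3 matrix (the total number of elements must stay the same)
tensor_to_reshape = tf.constant([1, 2, 3, 4, 5, 6])
reshaped_tensor = tf.reshape(tensor_to_reshape, shape=(2, 3))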
You can also transpose a tensor (switch its dimensions) via tf.transpose :
transposed_tensor = tf.transpose(tensor_to_transpose)
# Create two example tensors to operate on
a = tf.constant([1, 2, 3])
b = tf.constant([4, 5, 6])

addition = tf.add(a, b)
subtraction = tf.subtract(a, b)
multiplication = tf.multiply(a, b)
division = tf.divide(a, b)
Note: These are equivalent to their single operator equivalents in Python, for example, a + b == tf.add(a, b) .
# Create a variable (its value can later be updated in place with .assign())
variable = tf.Variable(initial_value=[1, 2, 3])
GradientTape
You can use tf.GradientTape to record operations on variables for automatic differentiation.
This is helpful for creating your own training loops with TensorFlow:
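A minimal sketch, computing the gradient of y = x² with respect to x:

x = tf.Variable(3.0)

with tf.GradientTape() as tape:
    y = x ** 2  # operations on variables inside the tape are recorded

# dy/dx = 2 * x = 6.0
gradient = tape.gradient(y, x)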
In TensorFlow, neural networks are built by combining layers. For example, a simple layer could just add 1 to all the elements of an input tensor.
The important point is that a layer manipulates (performs a mathematical operation on) an input tensor in some way to produce an
output tensor.
This is why techniques such as transfer learning are helpful because they allow you to leverage what has worked for someone
else’s similar problem and tailor it to your own.
Layers
Create layers using the tf.keras.layers module.
Examples include Dense (fully connected) and Conv2D (convolutional) layers:
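For example (the unit and filter counts here are illustrative):

# A fully connected (Dense) layer with 128 output units
dense_layer = tf.keras.layers.Dense(units=128, activation='relu')

# A 2D convolutional layer with 32 filters of size 3x3 (often used on image data)
conv_layer = tf.keras.layers.Conv2D(filters=32, kernel_size=3, activation='relu')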
And there are many more pre-built layer types available in the TensorFlow documentation.
Compile Model
Before we start training the model on data, we've got to compile it with an optimizer, loss function, and metric.
All three of these are customizable depending on the project you’re working on.
Loss function = measures how wrong the model is (the higher the loss, the more wrong the model, so lower is better).
Common loss values include tf.keras.losses.CategoricalCrossentropy for multi-class classification problems and
tf.keras.losses.BinaryCrossentropy for binary classification problems.
Optimizer = tries to adjust the model's parameters to lower the loss value. Common optimizers include tf.keras.optimizers.SGD
(Stochastic Gradient Descent) and tf.keras.optimizers.Adam . Each optimizer comes with generally good default settings,
however, one of the most important values to set is the learning_rate hyperparameter.
Metric = human readable value to interpret how the model is going. Metrics can be defined via tf.keras.metrics such as
tf.keras.metrics.Accuracy or via a list of strings such as ['accuracy'] .
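For example, a sketch of compiling a model for a multi-class classification problem (the exact loss, optimizer and learning rate depend on your problem):

model.compile(loss=tf.keras.losses.CategoricalCrossentropy(),
              optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              metrics=['accuracy'])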
Train Model
You can train your TensorFlow models using the tf.keras.Model.fit method and passing it the appropriate hyperparameters.
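A minimal sketch (assuming x_train / y_train and x_val / y_val have already been created; the epoch and batch size values are illustrative):

model.fit(x=x_train,                        # training features
          y=y_train,                        # training labels
          epochs=10,                        # how many passes over the training data
          batch_size=32,                    # how many samples per gradient update
          validation_data=(x_val, y_val))   # data to evaluate on during training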
Evaluate Model
Once you’ve trained a model, you can evaluate its performance on unseen data with tf.keras.Model.evaluate :
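A minimal sketch (assuming x_test / y_test are held-out test data and the model was compiled with an accuracy metric):

loss, accuracy = model.evaluate(x_test, y_test)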
Once trained, a model can be saved (and loaded back later) in two main formats:
SavedModel format — default format for saving a model in TensorFlow, saves a complete TensorFlow program including trained
parameters and does not require the original model building code to run. Useful for sharing models across TensorFlow.js,
TFLite, TensorFlow Serving and more.
HDF5 format — a more widely used data standard, however, does not contain as much information as the SavedModel format.
You can read the pros and cons of each of these in the TensorFlow documentation on Save and load Keras models.
Once the weights are saved, you can load them back in:
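A minimal sketch of saving and then restoring weights (the checkpoint path here is illustrative):

# Save only the model's weights
model.save_weights('checkpoints/my_checkpoint')

# Later, load the weights back into a model with the same architecture
model.load_weights('checkpoints/my_checkpoint')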
This can be useful if you’d like to stop training and resume it later in the same place where you left off.
6. Data Preprocessing
There are several ways to preprocess data with TensorFlow and Keras.
You can use the Keras layers API to create standalone preprocessing pipelines or preprocessing code that becomes part of a
SavedModel .
You can see a full list of preprocessing options in the TensorFlow preprocessing layers documentation.
Preprocessing layers include layers for text (e.g. TextVectorization ), numerical features (e.g. Normalization ) and categorical features (e.g. StringLookup and CategoryEncoding ).
You can also use tf.keras.layers to perform image data augmentation during training:
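A sketch of an augmentation block built from these layers (the specific layers and amounts are illustrative):

data_augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip('horizontal'),  # randomly flip images left/right
    tf.keras.layers.RandomRotation(0.1),       # randomly rotate images a small amount
    tf.keras.layers.RandomZoom(0.1)            # randomly zoom in/out a small amount
])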
7. Transfer Learning
Transfer learning is a technique to leverage a pre-trained model for a new task with similar data.
For example, leveraging a model trained on a diverse range of images (such as ImageNet, a collection of 1000 different classes of
images) and tailoring it to predict specifically on your own task of predicting on dogs.
Or a model trained on large amounts of text data (e.g. all of Wikipedia, all Reddit and more) and customizing it to predict whether
the text of an insurance claim was fraud or not.
Note: Transfer Learning is covered in-depth in the Zero To Mastery TensorFlow Bootcamp course sections 04, 05, 06.
For example, to load a pre-trained TensorFlow model (e.g., MobileNetV2, a fast computer vision model which can run on mobile
devices) without the top classification layer:
pretrained_model = tf.keras.applications.MobileNetV2(
input_shape=input_shape, # define input shape of your data
include_top=False, # customize top layer to your own problem
weights='imagenet') # model weights have been pretrained on ImageNet
Another great source of pre-trained models is TensorFlow Hub (tfhub.dev). These include models for images, text, video, audio and more.
You can also load the models via the TensorFlow Python API or JavaScript API.
For example, the LEALLA (Lightweight language-agnostic sentence embedding model supporting 109 languages) is great for
encoding text into numbers and then finding patterns in those numbers such as similarity matching (e.g. matching two pieces of text
which are similar semantically rather than just string matching).
You can load it with:
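A hedged sketch using the tensorflow_hub library (the model handle below is a placeholder; the real LEALLA handle comes from its page on tfhub.dev):

import tensorflow_hub as hub

# Load the pre-trained sentence encoder as a Keras layer (placeholder handle shown)
encoder = hub.KerasLayer("https://tfhub.dev/google/...")

The sentence tensors below can then be passed to the encoder (e.g. encoder(english_sentences) ) to get their embeddings.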
# Works for different languages
english_sentences = tf.constant(["dog", "Puppies are nice.", "I enjoy taking long walks along the beach with my dog."])
italian_sentences = tf.constant(["cane", "I cuccioli sono carini.", "Mi piace fare lunghe passeggiate lungo la spiaggia con il mio cane."])
japanese_sentences = tf.constant(["犬", "子犬はいいです", "私は犬と一緒にビーチを散歩するのが好きです"])
model = tf.keras.Sequential([
pretrained_model, # frozen base model
# Custom top layers take outputs from pretrained model and tailor to your
# own problem
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(num_classes, activation='softmax')
])
model.compile(...)
model.fit(...)
For example, with the Functional API, you can create models with:
Multiple inputs (e.g images and text rather than just images).
Non-linear steps (e.g. different connections between layers rather than just sequential).
The following is an example of creating a one hidden layer model via the Functional API:
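(A hedged sketch follows; the 784-feature input and 10-class output are illustrative.)

inputs = tf.keras.Input(shape=(784,))                         # input layer
x = tf.keras.layers.Dense(128, activation='relu')(inputs)     # one hidden layer
outputs = tf.keras.layers.Dense(10, activation='softmax')(x)  # output layer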
# Combine the inputs and the outputs into a model (TensorFlow works out how they're connected)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
Multi-Input and Multi-Output Models with the Functional API
One of the big advantages of the Functional API is being able to create models which can handle multiple inputs and produce
multiple outputs.
The following is an example of a model that takes two input sources, performs a computation on them with a shared layer and then
outputs two outputs:
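A hedged sketch (the input shapes and layer sizes are illustrative):

# Two input sources
input_a = tf.keras.Input(shape=(32,), name='input_a')
input_b = tf.keras.Input(shape=(32,), name='input_b')

# A shared layer applied to both inputs
shared_dense = tf.keras.layers.Dense(64, activation='relu')
x_a = shared_dense(input_a)
x_b = shared_dense(input_b)

# Combine the two branches and produce two outputs
combined = tf.keras.layers.concatenate([x_a, x_b])
output_1 = tf.keras.layers.Dense(1, name='output_1')(combined)
output_2 = tf.keras.layers.Dense(1, name='output_2')(combined)

model = tf.keras.Model(inputs=[input_a, input_b], outputs=[output_1, output_2])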
The Functional API allows you to get as creative as you’d like connecting inputs, intermediate layers and output layers.
Note: Multi-input models with the TensorFlow Functional API are covered in the Zero To Mastery TensorFlow Bootcamp course
section 09.
9. Callbacks
Callbacks are functions that are executed at various stages of training, allowing you to customize and monitor the training process.
EarlyStopping
EarlyStopping stops a model from training when a defined metric has stopped improving.
For example, if you wanted to stop your model from training if the validation loss (represented with the string ’val_loss’ ) hadn’t
improved for 5 epochs, you could write the following:
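A minimal sketch:

early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss',  # metric to watch
                                                  patience=5)          # epochs without improvement before stopping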
ModelCheckpoint
ModelCheckpoint saves your model's weights/training progress at defined intervals.
For example, if you’d like to save your model at the end of every epoch, and only save it if the model is the best performing model
so far according to the validation loss, you could write:
# Create a checkpoint callback to save the best model every epoch (if it's better than the previously saved model)
checkpoint = tf.keras.callbacks.ModelCheckpoint(filepath='model-{epoch:02d}',
save_best_only=True,
monitor='val_loss')
For example, you could define a schedule (via a Python function) to lower the learning rate by 10x every 10 epochs.
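A minimal sketch of such a schedule (the exact factor and interval are up to you):

def lr_schedule(epoch, lr):
    # Lower the learning rate by 10x every 10 epochs
    if epoch > 0 and epoch % 10 == 0:
        return lr * 0.1
    return lr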
And then use the tf.keras.callbacks.LearningRateScheduler callback to implement the schedule during training:
lr_callback = tf.keras.callbacks.LearningRateScheduler(lr_schedule)
model.fit(x=x_train,
y=y_train,
epochs=10,
batch_size=32,
validation_data=(x_val, y_val),
callbacks=[early_stopping, # pass in a list of callbacks to use
checkpoint,
lr_callback])
To do so, you can subclass the tf.keras.layers.Layer class (the class from which all other layers are built on) and implement:
1. An __init__() method — create all objects which are required for the layer (these are generally input-independent).
2. A build() method — create the layer's weights (this part is input-dependent, it runs once the layer knows the shape of its inputs).
3. A call() method — setup all steps in the forward computation (when you call the layer, what should it do to the inputs?).
For example, let's create a layer which just adds 1 to all elements of the input:
import tensorflow as tf
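# A minimal sketch of such a layer (the class name AddOne is illustrative)
class AddOne(tf.keras.layers.Layer):
    def call(self, inputs):
        # Add 1 to every element of the input tensor
        return inputs + 1

# Calling the layer on a tensor of 10 zeros returns a tensor of 10 ones
add_one_layer = AddOne()
add_one_layer(tf.zeros(10))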
>>> <tf.Tensor: shape=(10,), dtype=float32, numpy=array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], dtype=float32)>
For more on creating custom layers, see the TensorFlow tutorial on Custom layers.
class MyCustomModel(tf.keras.Model):  # the class name here is illustrative
    def __init__(self):
        super().__init__()
        # Create the layers the model will use

    # call() defines what happens when you call fit() on the model
    def call(self, inputs, training=False):
        # Define the forward pass
        return ...
But if you’d like to create your own loss function, you can do so by subclassing tf.keras.losses.Loss (this is what all existing loss
functions already do).
For example, if we wanted to recreate mean absolute error (MAE), we could do so with:
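A minimal sketch (subclassing tf.keras.losses.Loss and implementing the call() method):

class MAE(tf.keras.losses.Loss):
    def call(self, y_true, y_pred):
        # Mean of the absolute differences between targets and predictions
        return tf.reduce_mean(tf.abs(y_true - y_pred))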
TensorFlow has a bunch of pre-built metrics for all kinds of different problems available in tf.keras.metrics .
However, if you’d like to create your own custom metric, you can do so via subclassing tf.keras.metrics.Metric (the class all other
existing TensorFlow metrics build on):
class MyCustomMetric(tf.keras.metrics.Metric):  # the class name here is illustrative
    def update_state(self, y_true, y_pred, sample_weight=None):
        # Update the metric's state for a batch of predictions
    def result(self):
        # Compute and return the final result
    def reset_states(self):
        # Reset the metric's state
But perhaps you’d like to try out creating your own custom optimizer.
To do so, you can subclass tf.keras.optimizers.Optimizer .
Creating your own custom optimizer requires overriding the build() , update_step() and get_config() methods:
class MyCustomOptimizer(tf.keras.optimizers.Optimizer):
def __init__(self, ...):
super(MyCustomOptimizer, self).__init__(...)
def build(self, var_list):
# Create optimizer-related variables such as momentum in SGD
def update_step(self, gradient, variable):
# Implement the optimizer's updating logic
def get_config(self):
# Return optimizer configuration
Rather than using fit() , you can also write your own custom training loop for full control over each training step:

# Record the forward pass and loss calculation on the tape
with tf.GradientTape() as tape:
    y_pred = model(x_batch, training=True)
    loss = loss_function(y_batch, y_pred)

# Compute gradients
gradients = tape.gradient(loss, model.trainable_variables)

# Apply gradients
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
This allows for greater flexibility, such as implementing custom learning rate schedules, gradient clipping, or using multiple
optimizers.
For more on custom training loops in TensorFlow, see the Writing a training loop from scratch guide in the TensorFlow
documentation.
TensorBoard is TensorFlow's built-in tool for visualizing and tracking machine learning experiments, including:
Training progress
Model graphs
Metrics
Data exploration
One of the most common workflows of using TensorBoard is to incorporate it as a callback during model training.
Doing so will save various model training logs to a specified directory for later viewing.
import os

# Create a directory to store the training logs
log_dir = "logs"
if not os.path.exists(log_dir):
    os.makedirs(log_dir)

# Create a TensorBoard callback which writes training logs to the directory above
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir)
And then we can use it during training by passing it to the fit method:
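A minimal sketch (assuming training data and a compiled model as before):

model.fit(x_train,
          y_train,
          epochs=10,
          callbacks=[tensorboard_callback])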
Launch TensorBoard
To view TensorBoard, run the following command in your terminal:
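tensorboard --logdir logs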
Or if you’re running in a notebook, you can do so by first loading the %tensorboard magic extension and then calling it as above
(except for using %tensorboard instead of tensorboard ):
# Load the TensorBoard notebook extension
%load_ext tensorboard
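%tensorboard --logdir logs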
Advanced techniques
Note: I use the term “advanced” here loosely. Not because these techniques may take advanced knowledge to know, learn or
implement but because they generally come more into play later on in a TensorFlow/machine learning workflow.
17. Regularization
Because neural network models have incredible potential to learn patterns in data, they are prone to overfitting (learning the patterns of the training data too well).
By too well, I mean the model basically memorizes the patterns in the training set and doesn't generalize to unseen data such as the test
dataset.
Such as when a student learns the course materials (training data) very well but fails the final exam (testing data).
There are several techniques to prevent overfitting and these techniques are collectively known as regularization:
Get more training data (more data means more opportunities for a model to learn diverse patterns).
Add weight decay or weight regularization (penalize large weight values so the model doesn't fit the training data too closely).
Weight decay/regularization
You can add weight decay/regularization to a particular layer using L1 or L2 regularization via the tf.keras.regularizers API (see the
TensorFlow documentation for more on each of these), let’s try L2 regularization:
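A minimal sketch (the layer size and regularization strength of 0.01 are illustrative):

layer_with_l2 = tf.keras.layers.Dense(units=64,
                                      kernel_regularizer=tf.keras.regularizers.L2(0.01))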
You can also apply regularization via a string identifier, for example passing kernel_regularizer='l1_l2' applies L1 and L2 regularization simultaneously.
You can also apply weight decay right into the optimizer using the weight_decay parameter such as in tf.keras.optimizers.AdamW :
adam_with_weight_decay = tf.keras.optimizers.AdamW(learning_rate=0.001,
weight_decay=0.004)
Dropout
Dropout is another common regularization technique which randomly sets a specified fraction of input values to zero.
The thought process here is that if a certain fraction (usually around 10-20%) of a layer's outputs are randomly set to zero, the remaining
weights should be forced to be better representations of the data.
Dropout is usually placed after and in between layers and can be created via tf.keras.layers.Dropout .
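A minimal sketch (the 20% dropout rate and layer sizes are illustrative):

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dropout(0.2),  # randomly zero 20% of the previous layer's outputs during training
    tf.keras.layers.Dense(10, activation='softmax')
])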
The practice of finding the best values for a model's hyperparameters (settings you choose, such as the learning rate or the number of hidden units) is known as hyperparameter tuning.
You can either run multiple experiments adjusting the values by hand.
Or you can write code to do it for you.
The Keras Tuner library allows you to programmatically tune the hyperparameters of your TensorFlow and Keras models. It comes with several built-in tuning algorithms:
RandomSearch
Hyperband
BayesianOptimization
Sklearn
The documentation explains each of these in more detail, however, here is an example of using RandomSearch to tune the hidden
units in the first layer and the learning rate of the optimizer:
import tensorflow as tf
from keras_tuner import RandomSearch
# Create dataset
(img_train, label_train), (img_test, label_test) = tf.keras.datasets.fashion_mnist.load_data()
# Normalize dataset
img_train = img_train.astype('float32') / 255.0
img_test = img_test.astype('float32') / 255.0
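# A hedged sketch of a model-building function (the layer sizes, hyperparameter
# ranges and loss below are illustrative)
def build_model(hp):
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Flatten(input_shape=(28, 28)))
    # Tune the number of hidden units in the first Dense layer
    hp_units = hp.Int('units', min_value=32, max_value=512, step=32)
    model.add(tf.keras.layers.Dense(units=hp_units, activation='relu'))
    model.add(tf.keras.layers.Dense(10, activation='softmax'))
    # Tune the learning rate of the optimizer
    hp_learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=hp_learning_rate),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])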
    return model
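With a model-building function defined, a tuner can then be created and run (the objective, number of trials and epochs below are illustrative):

tuner = RandomSearch(build_model,
                     objective='val_accuracy',
                     max_trials=5)

tuner.search(img_train, label_train,
             epochs=5,
             validation_split=0.2)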
By default, TensorFlow computes with the 32-bit float32 datatype. However, lower-precision datatypes such as float16 can be used to speed up training (less precision means less data to
compute on) without a significant loss to performance.
If your GPU has a compute capability score of 7.0 or higher, it can benefit from using mixed precision training.
TensorFlow enables automatic mixed precision training (e.g. it will allocate the float16 datatype to layers which are compatible with
using it and will default to float32 when necessary) via the tensorflow.keras.mixed_precision API:
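A minimal sketch of turning on mixed precision globally:

from tensorflow.keras import mixed_precision

# Use float16 for compute where supported, keeping float32 where needed for numerical stability
mixed_precision.set_global_policy('mixed_float16')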
# Train the model as usual and mixed precision will be applied where possible
For example, let's say you have a large neural network which performs well but runs into memory issues when using a batch size of
32 (larger batch sizes generally enable better learning opportunities).
This is where gradient accumulation can help. This code example shows how the forward pass, loss calculation and gradient calculation happen on the whole batch but the
gradient updates only happen once every 4 steps:
# Forward pass gets performed on the whole batch
y_pred = model(x_batch, training=True)
# Compute loss on the whole batch
loss = loss_function(y_batch, y_pred)
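# A hedged sketch of the remaining accumulation logic. Assumes the forward pass
# above ran inside a `with tf.GradientTape() as tape:` block, and that
# `accumulated_gradients` was initialised to zeros and `accumulation_steps = 4`.
gradients = tape.gradient(loss, model.trainable_variables)
accumulated_gradients = [acc + g for acc, g in zip(accumulated_gradients, gradients)]

# Only apply (and then reset) the accumulated gradients every 4 batches
if (step + 1) % accumulation_steps == 0:
    optimizer.apply_gradients(zip(accumulated_gradients, model.trainable_variables))
    accumulated_gradients = [tf.zeros_like(v) for v in model.trainable_variables]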
The TensorFlow Profiler helps you understand what operations (ops) are consuming what hardware resources (time and memory wise) and thus allows
you to modify various parts which may be taking more time than necessary.
You can get started with the TensorFlow Profiler in several ways via the various profiling APIs (such as the TensorBoard Keras callback, the tf.profiler.experimental.start() / stop() functions, or the tf.profiler.experimental.Profile() context manager).
The following is an example of using the TensorFlow Profiler with the TensorBoard Callback:
import tensorflow as tf
from tensorflow.keras.callbacks import TensorBoard

# Create a TensorBoard callback that also profiles batches 10 to 20 (profile_batch turns the Profiler on)
tensorboard_callback = TensorBoard(log_dir="logs", profile_batch=(10, 20))
# Inside a notebook
%load_ext tensorboard
%tensorboard --logdir logs
Distributed training spreads the training workload across multiple devices and machines. This means you can take your existing code and run it at a larger scale and in turn create larger models and train on larger datasets.
TensorFlow allows for distributed training to take place via the tf.distribute.Strategy API.
There are several types of training strategy to use:
Single machine, single GPU — this is the default strategy and does not require tf.distribute.Strategy .
MirroredStrategy ( tf.distribute.MirroredStrategy ) — this mode supports synchronous distributed training on multiple GPUs on a
single machine. Each variable is replicated across each GPU.
TPUStrategy ( tf.distribute.TPUStrategy ) — this mode allows the running of TensorFlow code on Tensor Processing Units
(TPUs). TPUs are Google’s custom chips designed for machine learning workflows.
MultiWorkerMirroredStrategy ( tf.distribute.MultiWorkerMirroredStrategy ) — this mode allows for use of multiple machines with
multiple GPUs (e.g. a cluster of 10 machines each with 8 GPUs).
For example, you can use tf.distribute.MirroredStrategy when you have multiple GPUs on a single machine (for example, you may have 4 GPUs on a single machine):
import tensorflow as tf
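# A minimal sketch (assumes multiple GPUs are visible to TensorFlow; the model below is illustrative)
strategy = tf.distribute.MirroredStrategy()

# Anything created inside the strategy's scope (model, optimizer) is replicated across the GPUs
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer='adam', loss='mse')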
And to train across multiple machines, each with one or more GPUs, you can use tf.distribute.MultiWorkerMirroredStrategy :
import tensorflow as tf
Note: To use tf.distribute.MultiWorkerMirroredStrategy , you'll need to create a TF_CONFIG environment variable on each worker
machine with the cluster specification and the current worker's task information. You can see a full guide on this in the TensorFlow
documentation on using Multiple workers with Keras.
The following tools and libraries aren't part of the core TensorFlow package but are very useful add-ons for many different machine learning workflows.
23. TensorFlow Datasets
TensorFlow Datasets (TFDS) is a collection of ready-to-use datasets for TensorFlow (and other machine learning frameworks).
It includes hundreds of different ready to download datasets across various modalities from images to text to audio to biology to
computer science.
You can get started by installing the tensorflow-datasets package:
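# Install the TensorFlow Datasets package
pip install tensorflow-datasets

Once installed, datasets can be loaded by name as tf.data.Dataset objects (a minimal sketch using the MNIST dataset):

import tensorflow_datasets as tfds

ds = tfds.load('mnist', split='train')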
For a full tutorial on TensorFlow Datasets, see the TensorFlow Datasets tutorial.
And for a full list of datasets available in TensorFlow Datasets, see the dataset catalog.
A common workflow is to train models with TensorFlow and Keras via the Python API and then convert the trained model to the
TensorFlow Lite format for deployment on a mobile or edge device.
You can do so by first installing TensorFlow Lite via the tflite package:
Then you can either convert a SavedModel (a model you’ve already trained and saved, the recommended approach) or convert a
Keras model to the .tflite format:
import tensorflow as tf
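# A minimal sketch of both conversion paths (the paths and variable names below are illustrative)

# Convert a SavedModel directory
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
tflite_model = converter.convert()

# Or convert an in-memory Keras model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the converted model to a .tflite file
with open("model.tflite", "wb") as f:
    f.write(tflite_model)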
A machine learning pipeline can be constructed by stringing together TFX (TensorFlow Extended) components:
import tensorflow as tf
import tfx
TensorFlow Serving
TensorFlow Serving is one part of the TFX ecosystem which provides a high-performance serving system for deploying machine
learning models.
For example, here's a way to serve a demo model via TensorFlow Serving using Docker. The served model exposes a REST API for a
prediction that returns half plus two of the input (e.g. an input of [1, 2, 3] becomes [2.5, 3.0, 3.5]):
# Start TensorFlow Serving container and open the REST API port
docker run -t --rm -p 8501:8501 \
-v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" \
-e MODEL_NAME=half_plus_two \
tensorflow/serving &
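Once the container is running, a sketch of querying the model's REST API (the response should contain the predictions [2.5, 3.0, 3.5] ):

curl -d '{"instances": [1.0, 2.0, 3.0]}' \
  -X POST http://localhost:8501/v1/models/half_plus_two:predict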
Extensions such as TensorFlow Addons are in place because machine learning moves so fast it's often hard to incorporate everything in a central library
such as TensorFlow, hence the addons.
All addons follow similar API standards as the regular TensorFlow library.
You can get started with TensorFlow Addons by installing the tensorflow-addons package:
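pip install tensorflow-addons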
import tensorflow_addons as tfa
TensorFlow Decision Forests (TF-DF) is a library for training and serving decision forest models (such as random forests and gradient boosted trees) with TensorFlow. With TF-DF you can perform classification, regression and ranking tasks.
Decision forest models typically perform exceptionally well on structured data (e.g. rows and columns) whereas neural networks
typically perform exceptionally well on unstructured data (e.g. images, text, video, audio).
You can get started with TensorFlow Decision Forests by installing the tensorflow_decision_forests package:
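# Install the TensorFlow Decision Forests package
pip install tensorflow_decision_forests

A minimal, hedged sketch of training a random forest (this assumes train_dataset is a tf.data.Dataset of (features, label) pairs):

import tensorflow_decision_forests as tfdf

model = tfdf.keras.RandomForestModel()
model.fit(train_dataset)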
For more, you can find the full codebase for the TensorFlow Decisions Forests library on GitHub.
TensorFlow Probability is a library for probabilistic reasoning and statistical analysis in TensorFlow. It allows you to create probability distributions such as Bernoulli with tfp.distributions.Bernoulli and Normal with
tfp.distributions.Normal .
It provides tools for performing variational inference, Markov chain Monte Carlo (MCMC) sampling and more.
You can get started with TensorFlow Probability by installing the tensorflow-probability package:
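pip install tensorflow-probability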
import tensorflow_probability as tfp
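# A small sketch: create a standard Normal distribution, sample from it and score the samples
normal = tfp.distributions.Normal(loc=0.0, scale=1.0)
samples = normal.sample(5)            # draw 5 random samples
log_probs = normal.log_prob(samples)  # log-likelihood of those samples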
Combining computer vision and computer graphics techniques enables machine learning practitioners to leverage already
existing graphical data for machine learning models.
To get started with TensorFlow Graphics, you can install the package tensorflow-graphics :
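pip install tensorflow-graphics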
And as an example, to compute the triangle areas of a given tensor, you can use:
TensorFlow Cloud is a library that helps you run TensorFlow training jobs on Google Cloud with minimal code changes. You can see a full workflow of creating and running a TensorFlow model with TensorFlow Cloud in the TensorFlow documentation.
# Create model
# ...
31. TensorFlow.js
TensorFlow.js is a library for machine learning in JavaScript, enabling you to train and deploy models directly in web browsers or
Node.js environments.
Deploying a machine learning model right into the browser brings a whole bunch of benefits:
Data can be computed on in the browser (without being sent to a server), which is good for privacy and latency.
Predictions in the browser can leverage the person's device hardware and save on server costs.
You can create a machine learning application with one code stack (e.g. all in JavaScript).
Otherwise, there are several more options for installing, such as via yarn or directly in HTML:
<html>
<head>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@latest"> </script>
</head>
<body>
<h4>Tiny TFJS example<hr/></h4>
<div id="micro-out-div">Training...</div>
<script src="./index.js"> </script>
</body>
</html>
Once TensorFlow.js is ready to go, the usage is very similar to the Python API:
const model = tf.sequential();
model.add(tf.layers.dense({units: 1, inputShape: [1]}));
// Prepare the model for training: Specify the loss and the optimizer.
model.compile({loss: 'meanSquaredError', optimizer: 'sgd'});
You can see more on TensorFlow.js such as using pretrained models and creating size-optimized models (for running directly in
browsers) in the TensorFlow.js documentation.
Summary
As you’ve seen, the TensorFlow library is vast and covers almost every area of machine learning you can imagine.
By following this cheatsheet, you’ve now got the resources and ideas to start performing a variety of machine learning tasks with
TensorFlow.
You can keep this guide as a reference when working on your own projects.
And don't forget, there’s plenty more to explore in the TensorFlow documentation, follow your curiosity and see where it leads you.