L7 - Functional API

The Keras Functional API allows for more flexible model creation than the Sequential API by building models as directed acyclic graphs (DAGs) of layers. Models are constructed by connecting input and output nodes representing layers, allowing non-linear model topologies. The functional API can handle models with shared layers and multiple inputs/outputs. Models built with the functional API can be trained, evaluated, saved and loaded similarly to Sequential models. The same graph of layers can define multiple models, and models can be treated as layers by connecting them in call operations.

Keras Functional API

The Functional API


• The Keras functional API is a way to create models that is more flexible
than the tf.keras.Sequential API.
• The functional API can handle models with non-linear topology, models
with shared layers, and models with multiple inputs or outputs.
• The main idea is that a deep learning model is usually a directed acyclic
graph (DAG) of layers. So the functional API is a way to build graphs of
layers.
• This is a basic graph with three layers. To build this model using the functional
API, start by creating an input node:
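A minimal sketch of that first step, following the standard Keras pattern (the
three-layer diagram itself is not reproduced here; the 784-dimensional shape
matches the vectorized MNIST inputs used later in these slides):

    from tensorflow import keras

    # Create the input node of the graph; the batch size is left implicit.
    inputs = keras.Input(shape=(784,))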

• The shape of the data is set as a 784-dimensional vector. The batch size is
always omitted since only the shape of each sample is specified.
• If, for example, you have an image input with a shape of (32, 32, 3), you would
use:
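For instance (the variable name img_inputs is just illustrative):

    # A hypothetical input node for 32x32 RGB images:
    img_inputs = keras.Input(shape=(32, 32, 3))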

• The inputs object that is returned contains information about the shape and
dtype of the input data that you feed to your model. Here's the shape:
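For example (the exact printed representation may vary slightly between
TensorFlow/Keras versions):

    inputs.shape  # TensorShape([None, 784]) -- None is the batch dimension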

• Here's the dtype:
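And, continuing the same sketch:

    inputs.dtype  # tf.float32, the default floating-point dtype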


• You create a new node in the graph of layers by calling a layer on this
inputs object:
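A sketch of this layer call (standard Keras; the 64-unit Dense layer is
illustrative):

    from tensorflow.keras import layers

    dense = layers.Dense(64, activation="relu")
    x = dense(inputs)  # calling the layer connects "inputs" to this new node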

• The "layer call" action is like drawing an arrow from "inputs" to this layer
you created. You're "passing" the inputs to the dense layer, and out you
get x.
• Let's add a few more layers to the graph of layers:
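Continuing the sketch (the layer sizes are illustrative; the 10-unit output
matches the 10 MNIST classes used later):

    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(10)(x)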

• At this point, you can create a Model by specifying its inputs and outputs
in the graph of layers:
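For example (the model name is illustrative):

    model = keras.Model(inputs=inputs, outputs=outputs, name="mnist_model")
    model.summary()  # prints the layer-by-layer architecture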
• You can also plot the model as a graph:
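A sketch using keras.utils.plot_model (the output file name is illustrative;
plotting requires pydot and graphviz to be installed):

    keras.utils.plot_model(model, "my_first_model.png")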
• And, optionally, display the input and output shapes of each layer in the
plotted graph:
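For example:

    keras.utils.plot_model(
        model, "my_first_model_with_shape_info.png", show_shapes=True
    )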

• A "graph of layers" is an intuitive mental image for a deep learning model,


and the functional API is a way to create models that closely mirror this.
Training, evaluation, and inference
• Training, evaluation, and inference work in exactly the same way for
models built using the functional API as for Sequential models.
• Here, load the MNIST image data, reshape it into vectors, fit the model on
the data (while monitoring performance on a validation split), then
evaluate the model on the test data:
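A sketch of that workflow, continuing with the model built above (the batch
size, epoch count, and optimizer choice are illustrative):

    (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

    # Flatten the 28x28 images into 784-dimensional vectors and scale to [0, 1].
    x_train = x_train.reshape(60000, 784).astype("float32") / 255
    x_test = x_test.reshape(10000, 784).astype("float32") / 255

    model.compile(
        loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        optimizer=keras.optimizers.RMSprop(),
        metrics=["accuracy"],
    )

    # Train while monitoring performance on a 20% validation split.
    history = model.fit(x_train, y_train, batch_size=64, epochs=2,
                        validation_split=0.2)

    # Evaluate on the held-out test data.
    test_scores = model.evaluate(x_test, y_test, verbose=2)
    print("Test loss:", test_scores[0])
    print("Test accuracy:", test_scores[1])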
Save and serialize
• Saving the model and serialization work the same way for models built
using the functional API as they do for Sequential models.
• The standard way to save a functional model is to call model.save() to save
the entire model as a single file (see the sketch below).
• You can later recreate the same model from this file, even if the code that
built the model is no longer available.
• This saved file includes the:
– model architecture
– model weight values (that were learned during training)
– model training config, if any (as passed to compile)
– optimizer and its state, if any (to restart training where you left off)
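A minimal sketch, assuming the model variable from the training example above
(the file name and the .keras format are illustrative and version-dependent):

    # Saves architecture, weights, training config, and optimizer state.
    model.save("my_model.keras")
    del model
    # Recreate the exact same model purely from the saved file.
    model = keras.models.load_model("my_model.keras")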
Use the same graph of layers to define multiple
models
• In the functional API, models are created by specifying their inputs and
outputs in a graph of layers.
• That means that a single graph of layers can be used to generate multiple
models.
• In the example below, you use the same stack of layers to instantiate two
models: an encoder model that turns image inputs into 16-dimensional
vectors, and an end-to-end autoencoder model for training.
• Here, the decoding architecture is strictly symmetrical to the encoding
architecture, so the output shape is the same as the input shape (28, 28,
1).
• The reverse of a Conv2D layer is a Conv2DTranspose layer, and the reverse
of a MaxPooling2D layer is an UpSampling2D layer.
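A sketch of such an example, assuming (28, 28, 1) image inputs and a
16-dimensional encoding as described above (the exact filter counts are
illustrative):

    encoder_input = keras.Input(shape=(28, 28, 1), name="img")
    x = layers.Conv2D(16, 3, activation="relu")(encoder_input)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    x = layers.MaxPooling2D(3)(x)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    x = layers.Conv2D(16, 3, activation="relu")(x)
    encoder_output = layers.GlobalMaxPooling2D()(x)  # 16-dimensional vector

    # First model: the encoder alone.
    encoder = keras.Model(encoder_input, encoder_output, name="encoder")

    # Decoding path mirrors the encoding path.
    x = layers.Reshape((4, 4, 1))(encoder_output)
    x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
    x = layers.Conv2DTranspose(32, 3, activation="relu")(x)
    x = layers.UpSampling2D(3)(x)
    x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
    decoder_output = layers.Conv2DTranspose(1, 3, activation="relu")(x)  # (28, 28, 1)

    # Second model: the end-to-end autoencoder, built from the same graph of layers.
    autoencoder = keras.Model(encoder_input, decoder_output, name="autoencoder")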
All models are callable, just like layers
• You can treat any model as if it were a layer by invoking it on an Input or
on the output of another layer.
• By calling a model you aren't just reusing the architecture of the model,
you're also reusing its weights.
• To see this in action, here's a different take on the autoencoder example
that creates an encoder model and a decoder model, and chains them in two
calls to obtain the autoencoder model:
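A sketch of the chaining step, assuming an encoder model like the one above
and a standalone decoder model that maps 16-dimensional vectors back to
(28, 28, 1) images:

    # `encoder` and `decoder` are both keras.Model instances; calling them
    # reuses their architecture *and* their weights.
    autoencoder_input = keras.Input(shape=(28, 28, 1), name="img")
    encoded_img = encoder(autoencoder_input)  # call the encoder model like a layer
    decoded_img = decoder(encoded_img)        # call the decoder model like a layer
    autoencoder = keras.Model(autoencoder_input, decoded_img, name="autoencoder")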
• As you can see, models can be nested: a model can contain sub-models
(since a model is just like a layer). A common use case for model nesting is
ensembling. For example, here's how to ensemble a set of models into a
single model that averages their predictions:
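A sketch of a simple averaging ensemble (the 128-dimensional input and the
single-unit Dense output are illustrative):

    def get_model():
        inputs = keras.Input(shape=(128,))
        outputs = layers.Dense(1)(inputs)
        return keras.Model(inputs, outputs)

    model1 = get_model()
    model2 = get_model()
    model3 = get_model()

    inputs = keras.Input(shape=(128,))
    y1 = model1(inputs)  # each sub-model is called like a layer
    y2 = model2(inputs)
    y3 = model3(inputs)
    outputs = layers.average([y1, y2, y3])  # average the three predictions
    ensemble_model = keras.Model(inputs=inputs, outputs=outputs)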
Thanks
