
Introduction to Tensor with Tensorflow

(https://www.geeksforgeeks.org/introduction-tensor-tensorflow/)

TensorFlow is an open-source end-to-end platform for creating Machine


Learning applications. It is a symbolic math library that uses dataflow and
differentiable programming to perform various tasks focused on training and
inference of deep neural networks. It allows developers to create machine
learning applications using various tools, libraries, and community resources.

TensorFlow is an open-source software library for dataflow programming across


a range of tasks. Google open-sourced TensorFlow in November 2015. Since
then, TensorFlow has become the most starred machine learning repository on
GitHub (https://github.com/tensorflow/tensorflow). Currently, the most famous
deep learning library in the world is Google's TensorFlow. Google uses
machine learning in all of its products to improve search, translation,
image captioning, and recommendations.

Why TensorFlow? TensorFlow's popularity is due to many things, but primarily
because of the computational graph concept, automatic differentiation, and the
adaptability of the TensorFlow Python API structure. This makes solving real
problems with TensorFlow accessible to most programmers.
Google's TensorFlow engine has a unique way of solving problems, one that
allows machine learning problems to be solved very efficiently. We will cover
the basic steps to understand how TensorFlow operates.

What is a Tensor in TensorFlow?

TensorFlow, as the name indicates, is a framework to define and run


computations involving tensors. A tensor is a generalization of vectors and
matrices to potentially higher dimensions. Internally, TensorFlow
represents tensors as n-dimensional arrays of base datatypes. Each
element in the Tensor has the same data type, and the data type is always
known. The shape (that is, the number of dimensions it has and the size of
each dimension) might be only partially known. Most operations produce
tensors of fully-known shapes if the shapes of their inputs are also fully
known, but in some cases it’s only possible to find the shape of a tensor at
graph execution time.
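
For example, a placeholder can have a partially known shape. A minimal TF 1.x
sketch (the shape [None, 3] is illustrative):

import tensorflow as tf

# The first dimension (the batch size) is unknown until graph execution time
x = tf.placeholder(tf.float32, shape=[None, 3])
print(x.shape)    # (?, 3)
y = tf.square(x)  # the result's shape is also only partially known
print(y.shape)    # (?, 3)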

General TensorFlow Algorithm Outlines

Here we will introduce the general flow of TensorFlow algorithms.


1. Import or generate data
All of our machine learning algorithms will depend on data. In practice, we will
either generate data or use an outside source of data. Sometimes it is better
to rely on generated data, because then we know the expected outcome.
TensorFlow also comes preloaded with well-known datasets such as MNIST and
CIFAR-10.
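A minimal data-generation sketch (the coefficients 3.0 and 2.0 are
illustrative; generating from a known relationship means we know the
expected outcome):

import numpy as np

# 100 noisy samples from y = 3x + 2, so the expected outcome is known
x_data = np.random.rand(100, 1).astype(np.float32)
y_data = 3.0 * x_data + 2.0 + np.random.normal(0.0, 0.1, (100, 1)).astype(np.float32)
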
2. Transform and normalize data
The data is usually not in the correct dimension or type that our Tensorflow
algorithms expect. We will have to transform our data before we can use it.
Most algorithms also expect normalized data. TensorFlow has built-in
functions that can normalize the data for you, for example:
data = tf.nn.batch_norm_with_global_normalization(...)
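
If you prefer to normalize by hand, min-max scaling is one simple option; a
minimal sketch (the placeholder shape here is illustrative):

data = tf.placeholder(tf.float32, [None, 4])
# Scale all values into the [0, 1] range
normalized = (data - tf.reduce_min(data)) / (tf.reduce_max(data) - tf.reduce_min(data))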

3. Set algorithm parameters


Our algorithms usually have a set of parameters that we hold constant
throughout the procedure. For example, this can be the number of iterations, the
learning rate, or other fixed parameters of our choosing. It is considered good
form to initialize these together so the reader or user can easily find them.
learning_rate = 0.001
iterations = 1000

4. Initialize variables and placeholders


TensorFlow depends on us telling it what it can and cannot modify. TensorFlow
will modify the variables during optimization to minimize a loss function. To
accomplish this, we feed in data through placeholders. We need to initialize both
variables and placeholders with a size and type, so that TensorFlow knows what
to expect.
a_var = tf.constant(42)
x_input = tf.placeholder(tf.float32, [None, input_size])
y_input = tf.placeholder(tf.float32, [None, num_classes])

5. Define the model structure


After we have the data and have initialized our variables and placeholders,
we have to define the model. This is done by building a computational graph:
we tell TensorFlow what operations must be done on the variables and
placeholders to arrive at our model predictions.
y_pred = tf.add(tf.matmul(x_input, weight_matrix), b_matrix)

6. Declare the loss function


After defining the model, we must be able to evaluate the output. This is
where we declare the loss function. The loss function is very important as
it tells us how far off our predictions are from the actual values.
loss = tf.reduce_mean(tf.square(y_actual - y_pred))

7. Initialize and train the model


Now that we have everything in place, we create an instance of our graph,
feed in the data through the placeholders, and let TensorFlow change the
variables to better predict our training data. Here is one way to initialize
the computational graph.

with tf.Session(graph=graph) as session:
    ...
    session.run(...)
    ...

Note that we can also initiate our graph with

session = tf.Session(graph=graph)
session.run(…)

8. Evaluate the model (optional)


Once we have built and trained the model, we should evaluate the model by
looking at how well it does on new data through some specified criteria.

9. Predict new outcomes (optional)


It is also important to know how to make predictions on new, unseen data. We
can do this with all of our models once we have them trained.

Summary

In TensorFlow, we have to set up the data, variables, placeholders, and
model before we tell the program to train and change the variables to
improve the predictions. TensorFlow accomplishes this through the
computational graph. We tell it to minimize a loss function, and TensorFlow
does this by modifying the variables in the model. TensorFlow knows how
to modify the variables because it keeps track of the computations in the
model and automatically computes the gradients for every variable.
Because of this, we can see how easy it can be to make changes and try
different data sources.
Overall, algorithms are designed to be cyclic in TensorFlow. We set up this
cycle as a computational graph and (1) feed in data through the
placeholders, (2) calculate the output of the computational graph, (3)
compare the output to the desired output with a loss function, (4) modify
the model variables according to the automatic back propagation, and
finally (5) repeat the process until a stopping criterion is met.
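
Putting the steps together, here is a minimal end-to-end sketch in the TF 1.x
style used throughout this document (the synthetic data and hyperparameters
are illustrative):

import numpy as np
import tensorflow as tf

# (step 1) generate data with a known relationship, y = 3x + 2
x_data = np.random.rand(100, 1).astype(np.float32)
y_data = 3.0 * x_data + 2.0

# (step 3) algorithm parameters
learning_rate = 0.01
iterations = 1000

# (step 4) variables and placeholders
x_input = tf.placeholder(tf.float32, [None, 1])
y_actual = tf.placeholder(tf.float32, [None, 1])
weight = tf.Variable(tf.zeros([1, 1]))
bias = tf.Variable(tf.zeros([1]))

# (steps 5 and 6) model structure and loss function
y_pred = tf.add(tf.matmul(x_input, weight), bias)
loss = tf.reduce_mean(tf.square(y_actual - y_pred))

# automatic differentiation drives the variable updates
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

# (step 7) initialize and train
with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    for _ in range(iterations):
        session.run(train_step, feed_dict={x_input: x_data, y_actual: y_data})
    print(session.run([weight, bias]))  # should approach [[3.0]] and [2.0]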

TensorFlow - Mathematical Foundations


It is important to understand the mathematical concepts needed for TensorFlow
before creating a basic application in TensorFlow. Mathematics is considered
the heart of any machine learning algorithm: core mathematical concepts define
the solution for a specific machine learning algorithm.

Vector
An array of numbers, which is either continuous or discrete, is defined as a
vector. Machine learning algorithms deal with fixed-length vectors for better
output generation. Because machine learning algorithms deal with
multidimensional data, vectors play a crucial role.

Scalar
A scalar can be defined as a vector with a single component. Scalars include
only magnitude and no direction; with scalars, we are concerned only with the
magnitude. Examples of scalars include the weight and height of a child.

Matrix
A matrix can be defined as a multi-dimensional array arranged in the format
of rows and columns. The size of a matrix is defined by its row length and
column length. A matrix with “m” rows and “n” columns is specified as an
“m × n matrix”, which also defines the size of the matrix.

Mathematical Computations
In this section, we will learn about the different mathematical computations
in TensorFlow.

Addition of matrices
Addition of two or more matrices is possible if the matrices are of the same
dimension. The addition is element-wise: entries at the same position are
added.

Subtraction of matrices
The subtraction of matrices operates in a similar fashion to addition. The
user can subtract two matrices provided their dimensions are equal.

Multiplication of matrices
For two matrices A (m × n) and B (p × q) to be multipliable, n must be equal
to p. The resulting matrix is C (m × q).

Transpose of matrix
The transpose of a matrix A (m × n) is generally represented as A^T (n × m)
and is obtained by turning the column vectors into row vectors.

Dot product of vectors


Any vector of dimension n can be represented as a column matrix v ∈ R^(n×1).
The dot product of two n-dimensional vectors u and v is the scalar
u · v = u1·v1 + u2·v2 + … + un·vn, the sum of the products of the
corresponding components.
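
These operations can be checked quickly in NumPy, the array library used in
the examples below:

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A + B)            # element-wise addition:    [[ 6  8] [10 12]]
print(A - B)            # element-wise subtraction: [[-4 -4] [-4 -4]]
print(np.matmul(A, B))  # matrix product (2 x 2):   [[19 22] [43 50]]
print(A.T)              # transpose:                [[1 3] [2 4]]

u = np.array([1, 2, 3])
v = np.array([4, 5, 6])
print(np.dot(u, v))     # dot product: 1*4 + 2*5 + 3*6 = 32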

Tensor Data Structure


(https://chromium.googlesource.com/external/github.com/tensorflow/tensorflow/+/r0.10/tensorflow/g3doc/resources/dims_types.md)
Tensors are the basic data structures in TensorFlow. Tensors represent the
connecting edges in a flow diagram called the Data Flow Graph. A tensor is
defined as a multidimensional array or list.

Tensors are identified by the following three parameters −

Rank
The number of dimensions of a tensor is called its rank. The rank can be
described as the order, or n-dimensions, of the tensor.

Shape
The number of rows and columns together define the shape of a tensor.

Type
Type describes the data type assigned to the tensor's elements.

A user needs to consider the following activities for building a tensor −

● Build an n-dimensional array

● Convert the n-dimensional array into a TensorFlow tensor
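
A minimal sketch of these two activities, using tf.convert_to_tensor (TF 1.x
style; the array values are illustrative):

import numpy as np
import tensorflow as tf

# Build an n-dimensional array
array_2d = np.array([[1, 2], [3, 4]], dtype='int32')

# Convert it into a TensorFlow tensor
tensor_2d = tf.convert_to_tensor(array_2d)
print(tensor_2d.shape)        # (2, 2)           -- the shape
print(tensor_2d.dtype)        # <dtype: 'int32'> -- the type
print(tensor_2d.shape.ndims)  # 2                -- the rank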

Various Dimensions of TensorFlow


Tensors in TensorFlow come in various dimensions, which are described in
brief below −

One dimensional Tensor


A one-dimensional tensor is a normal array structure which includes one set
of values of the same data type.

Declaration

>>> import numpy as np
>>> tensor_1d = np.array([1.3, 1, 4.0, 23.99])
>>> print(tensor_1d)
[ 1.3   1.    4.   23.99]


The indexing of elements is the same as for Python lists. The first element
starts at index 0; to print values by index, all you need to do is mention
the index number.

>>> print(tensor_1d[0])
1.3
>>> print(tensor_1d[2])
4.0

Two dimensional Tensors


Sequences of arrays are used for creating two-dimensional tensors. The
creation of a two-dimensional tensor is described below −

Following is the complete syntax for creating a two-dimensional array:

>>> import numpy as np


>>> tensor_2d = np.array([(1,2,3,4),(4,5,6,7),(8,9,10,11),(12,13,14,15)])
>>> print(tensor_2d)
[[ 1 2 3 4]
[ 4 5 6 7]
[ 8 9 10 11]
[12 13 14 15]]
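Individual elements are indexed first by row and then by column, just as in
the one-dimensional case:

>>> print(tensor_2d[1][3])
7
>>> print(tensor_2d[2][2])
10
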

Tensor Handling and Manipulations


In this section, we will learn about Tensor Handling and Manipulations.

To begin with, let us consider the following code −

import tensorflow as tf
import numpy as np

matrix1 = np.array([(2,2,2),(2,2,2),(2,2,2)], dtype='int32')
matrix2 = np.array([(1,1,1),(1,1,1),(1,1,1)], dtype='int32')

print(matrix1)
print(matrix2)

matrix1 = tf.constant(matrix1)
matrix2 = tf.constant(matrix2)
matrix_product = tf.matmul(matrix1, matrix2)
matrix_sum = tf.add(matrix1, matrix2)

matrix_3 = np.array([(2,7,2),(1,4,2),(9,0,2)], dtype='float32')
print(matrix_3)

matrix_det = tf.matrix_determinant(matrix_3)

with tf.Session() as sess:
    result1 = sess.run(matrix_product)
    result2 = sess.run(matrix_sum)
    result3 = sess.run(matrix_det)

print(result1)
print(result2)
print(result3)
Output

The above code will generate the following output −
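
With the matrices defined above, the product is all sixes, the sum is all
threes, and the determinant works out to 56 (the exact float formatting may
vary by NumPy version):

[[2 2 2]
 [2 2 2]
 [2 2 2]]
[[1 1 1]
 [1 1 1]
 [1 1 1]]
[[2. 7. 2.]
 [1. 4. 2.]
 [9. 0. 2.]]
[[6 6 6]
 [6 6 6]
 [6 6 6]]
[[3 3 3]
 [3 3 3]
 [3 3 3]]
56.0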

Explanation
We have created multidimensional arrays in the above source code. It is
important to understand that we then created a graph and a session, which
manage the tensors and generate the appropriate output. With the help of the
graph, we obtain output specifying the mathematical calculations between
tensors.

Visualizing the Variable Creation in TensorBoard


To visualize the creation of variables in TensorBoard, we will reset the
computational graph and create a global initializing operation.

import tensorflow as tf
from tensorflow.python.framework import ops

# Reset graph
ops.reset_default_graph()

# Start a graph session

sess = tf.Session()

# Create variable

my_var = tf.Variable(tf.zeros([1,20]))

# Add summaries to tensorboard

merged = tf.summary.merge_all()

# Initialize graph writer:

writer = tf.summary.FileWriter("/tmp/variable_logs",
graph=sess.graph)

# Initialize operation

initialize_op = tf.global_variables_initializer()
# Run initialization of variable

sess.run(initialize_op)

Now run the following command on the command line.

tensorboard --logdir=/tmp

TensorFlow Components

Tensor

TensorFlow's name is directly derived from its core concept: the tensor. In
TensorFlow, all computations involve tensors. A tensor is an n-dimensional
vector or matrix that can represent all types of data. All values in a tensor
hold an identical data type with a known (or partially known) shape. The
shape of the data is the dimensionality of the matrix or array.

A tensor can originate from the input data or from the result of a
computation. In TensorFlow, all operations are conducted inside a graph. The
graph is a set of computations that take place successively. Each operation
is called an op node, and op nodes are connected to each other.

The graph outlines the ops and connections between the nodes. However, it does not
display the values. The edge of the nodes is the tensor, i.e., a way to populate the
operation with data.

Graphs (https://www.tensorflow.org/guide/intro_to_graphs#setup)
TensorFlow makes use of a graph framework. The graph gathers and describes
all the series of computations done during training. The graph has many
advantages:

● It was designed to run on multiple CPUs or GPUs, and even on mobile
operating systems.

● The portability of the graph allows you to preserve the computations for
immediate or later use. The graph can be saved and executed in the future.

● All the computations in the graph are done by connecting tensors together.

○ A tensor has a node and an edge. The node carries the mathematical
operation and produces endpoint outputs. The edges explain the
input/output relationships between nodes.

What are graphs?

Graph execution means that tensor computations are executed as a TensorFlow graph,
sometimes referred to as a tf.Graph or simply a "graph."

Graphs are data structures that contain a set of tf.Operation objects, which represent units of
computation; and tf.Tensor objects, which represent the units of data that flow between
operations. They are defined in a tf.Graph context. Since these graphs are data structures, they
can be saved, run, and restored all without the original Python code.

This is what a TensorFlow graph representing a two-layer neural network
looks like when visualized in TensorBoard (figure omitted).
The benefits of graphs

With a graph, you have a great deal of flexibility. You can use your TensorFlow graph in
environments that don't have a Python interpreter, like mobile applications, embedded devices,
and backend servers. TensorFlow uses graphs as the format for saved models when it exports
them from Python.

Graphs are also easily optimized, allowing the compiler to do
transformations like:

● Statically infer the value of tensors by folding constant nodes in your
computation ("constant folding").

● Separate sub-parts of a computation that are independent and split them
between threads or devices.

● Simplify arithmetic operations by eliminating common subexpressions.

There is an entire optimization system, Grappler, to perform this and other
speedups.

In short, graphs are extremely useful and let your TensorFlow run fast, run in parallel, and run
efficiently on multiple devices.

However, you still want to define your machine learning models (or other computations) in
Python for convenience, and then automatically construct graphs when you need them.


Setup

Import some necessary libraries:

import tensorflow as tf
import timeit
from datetime import datetime

Taking advantage of graphs

You create and run a graph in TensorFlow by using tf.function, either as a direct call or as a
decorator. tf.function takes a regular function as input and returns a Function. A Function is a
Python callable that builds TensorFlow graphs from the Python function. You use a Function in
the same way as its Python equivalent.
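
A minimal sketch (TF 2.x, where tf.function is available; the function name
multiply_fn is illustrative):

import tensorflow as tf

@tf.function
def multiply_fn(x, y):
    # Traced into a TensorFlow graph on first call
    return tf.multiply(x, y)

# Used exactly like the plain Python function it wraps
print(multiply_fn(tf.constant([1.0, 2.0, 3.0]), tf.constant([4.0, 5.0, 6.0])))
# tf.Tensor([ 4. 10. 18.], shape=(3,), dtype=float32)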

Why is TensorFlow Popular?


TensorFlow is popular because it is built to be accessible to everyone. The
TensorFlow library incorporates different APIs to build deep learning
architectures at scale, such as CNNs or RNNs. TensorFlow is based on graph
computation; it allows the developer to visualize the construction of the
neural network with TensorBoard, a tool that is helpful for debugging the
program. Finally, TensorFlow is built to be deployed at scale: it runs on
CPU and GPU.

TensorFlow attracts the most popularity on GitHub compared to the other deep
learning frameworks.

TensorFlow Algorithms

Below are the algorithms supported by TensorFlow:

Currently, TensorFlow 1.10 has a built-in API for:

● Linear regression: tf.estimator.LinearRegressor (see the sketch after
this list)

● Classification: tf.estimator.LinearClassifier

● Deep learning classification: tf.estimator.DNNClassifier

● Deep learning wide and deep: tf.estimator.DNNLinearCombinedClassifier

● Boosted tree regression: tf.estimator.BoostedTreesRegressor

● Boosted tree classification: tf.estimator.BoostedTreesClassifier
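
Here is a minimal tf.estimator.LinearRegressor sketch (assuming TF 1.10; the
feature name 'x' and the toy data are illustrative, not from the original):

import numpy as np
import tensorflow as tf

# One numeric feature named 'x'
feature_columns = [tf.feature_column.numeric_column('x', shape=[1])]
estimator = tf.estimator.LinearRegressor(feature_columns=feature_columns)

x_train = np.array([[1.0], [2.0], [3.0], [4.0]], dtype=np.float32)
y_train = np.array([[0.0], [-1.0], [-2.0], [-3.0]], dtype=np.float32)

# numpy_input_fn feeds in-memory arrays to the estimator
input_fn = tf.estimator.inputs.numpy_input_fn(
    {'x': x_train}, y_train, batch_size=4, num_epochs=None, shuffle=True)

estimator.train(input_fn=input_fn, steps=1000)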

How Calculations work in TensorFlow


import numpy as np

import tensorflow as tf

In the first two lines of code, we import numpy and tensorflow. With Python,
it is common practice to use a short name for a library: the advantage is
that we avoid typing the full name of the library every time we need it. For
instance, we import tensorflow as tf and call tf when we want to use a
TensorFlow function.

Let's practice the elementary workflow of TensorFlow with a simple example:
a computational graph that multiplies two numbers together.

During the example, we will multiply X_1 and X_2 together. TensorFlow will
create a node to connect the operation; in our example, it is called
multiply. When the graph is determined, TensorFlow's computational engine
will multiply X_1 and X_2 together.

TensorFlow Example
Finally, we will run a TensorFlow session that will run the computational graph with the
values of X_1 and X_2 and print the result of the multiplication.

Let's define the X_1 and X_2 input nodes. When we create a node in
TensorFlow, we have to choose what kind of node to create. The X_1 and X_2
nodes will be placeholder nodes: a placeholder is assigned a new value each
time we make a calculation. We will create them as tf.placeholder nodes.

Step 1: Define the variable

X_1 = tf.placeholder(tf.float32, name = "X_1")

X_2 = tf.placeholder(tf.float32, name = "X_2")

When we create a placeholder node, we have to pass in the data type. We will
be adding numbers here, so we can use a floating-point data type: tf.float32.
We also need to give this node a name; this name will show up when we look
at graphical visualizations of our model. Let's name this node X_1 by
passing in a parameter called name with a value of "X_1", and then define
X_2 the same way.

Step 2: Define the computation

multiply = tf.multiply(X_1, X_2, name = "multiply")

Now we can define the node that does the multiplication operation. In Tensorflow we
can do that by creating a tf.multiply node.

We will pass the X_1 and X_2 nodes to the multiplication node. This tells
TensorFlow to link those nodes in the computational graph: we are asking it
to pull the values from X_1 and X_2 and multiply them. Let's also give the
multiplication node the name multiply. This is the entire definition of our
simple computational graph.

Step 3: Execute the operation


To execute operations in the graph, we have to create a session. In
TensorFlow, this is done with tf.Session(). Once we have a session, we can
ask it to run operations on our computational graph by calling
session.run().

When the multiplication operation runs, it will see that it needs to grab
the values of the X_1 and X_2 nodes, so we also need to feed in values for
X_1 and X_2. We can do that by supplying a parameter called feed_dict. We
pass the values [1,2,3] for X_1 and [4,5,6] for X_2.

We print the results with print(result). We should see 4, 10, and 18 for
1×4, 2×5, and 3×6.

X_1 = tf.placeholder(tf.float32, name = "X_1")
X_2 = tf.placeholder(tf.float32, name = "X_2")

multiply = tf.multiply(X_1, X_2, name = "multiply")

with tf.Session() as session:
    result = session.run(multiply, feed_dict={X_1: [1, 2, 3], X_2: [4, 5, 6]})
    print(result)

[ 4. 10. 18.]
Options to Load Data into TensorFlow

The first step before training a machine learning algorithm is to load the
data. There are two common ways to load data:

1. Load data into memory: This is the simplest method. You load all your
data into memory as a single array, using plain Python code; these lines of
code are unrelated to TensorFlow.

2. TensorFlow data pipeline: TensorFlow has a built-in API that helps you
load the data, perform the operations, and feed the machine learning
algorithm easily. This method works very well, especially when you have a
large dataset. For instance, image records are known to be enormous and do
not fit into memory. The data pipeline manages the memory by itself.

What solution to use?

Load data in memory

If your dataset is not too big, i.e., less than 10 gigabytes, you can use
the first method: the data can fit into memory. You can use a famous library
called Pandas to import CSV files. You will learn more about Pandas in the
next tutorial.

Load data with Tensorflow pipeline

The second method works best if you have a large dataset. For instance, if
you have a dataset of 50 gigabytes and your computer has only 16 gigabytes
of memory, the machine will crash.

In this situation, you need to build a TensorFlow pipeline. The pipeline
will load the data in batches, or small chunks. Each batch is pushed to the
pipeline and is ready for training. Building a pipeline is an excellent
solution because it allows you to use parallel computing: TensorFlow can
train the model across multiple CPUs. This speeds up computation and permits
training powerful neural networks. You will see in the next tutorials how to
build a pipeline to feed your neural network.

In a nutshell, if you have a small dataset, you can load the data in memory with Pandas
library.

If you have a large dataset and you want to make use of multiple CPUs, then
you will be more comfortable working with the TensorFlow pipeline.

How to Create TensorFlow Pipeline

Here are the steps to create a TensorFlow pipeline:

In the example before, we manually added three values for X_1 and X_2. Now,
we will see how to load data into TensorFlow:

Step 1) Create the data

First of all, let's use the numpy library to generate two random values.

import numpy as np

x_input = np.random.sample((1,2))

print(x_input)

[[0.8835775 0.23766977]]

Step 2) Create the placeholder


As in the previous example, we create a placeholder with the name X. We need
to specify the shape of the tensor explicitly; in our case, we will load an
array with only two values, so we can write the shape as shape=[1,2].

# using a placeholder

x = tf.placeholder(tf.float32, shape=[1,2], name = 'X')

Step 3) Define the dataset method

Next, we need to define the Dataset where we can populate the value of the
placeholder x. We need to use the method tf.data.Dataset.from_tensor_slices

dataset = tf.data.Dataset.from_tensor_slices(x)

Step 4) Create the pipeline

In step four, we initialize the pipeline where the data will flow. We create
an iterator with make_initializable_iterator and name it iterator. Then we
call get_next on this iterator to fetch the next batch of data; we name this
step get_next. Note that in our example, there is only one batch of data,
with only two values.

iterator = dataset.make_initializable_iterator()

get_next = iterator.get_next()

Step 5) Execute the operation


The last step is similar to the previous example. We initiate a session and
run the iterator's initializer, feeding feed_dict with the value generated
by numpy. These two values will populate the placeholder x. Then we run
get_next to print the result.

with tf.Session() as sess:
    # feed the placeholder with data
    sess.run(iterator.initializer, feed_dict={ x: x_input })
    print(sess.run(get_next))

[0.8835775 0.23766978]

Summary

● TensorFlow meaning: TensorFlow is the most famous deep learning library of
recent years. A practitioner using TensorFlow can build any deep learning
structure, like a CNN, an RNN, or a simple artificial neural network.

● TensorFlow is mostly used by academics, startups, and large companies.
Google uses TensorFlow in almost all of its daily products, including Gmail,
Photos, and Google Search.

● Google's Brain team developed TensorFlow to fill the gap between
researchers and product developers. In 2015, they made TensorFlow public; it
has rapidly grown in popularity. Nowadays, TensorFlow is the deep learning
library with the most repositories on GitHub.

● Practitioners use TensorFlow because it is easy to deploy at scale. It is
built to work in the cloud or on mobile devices like iOS and Android.

TensorFlow works in a session. Each session is defined by a graph with
different computations. A simple example is to multiply two numbers. In
TensorFlow, three steps are required:

1. Define the variable

X_1 = tf.placeholder(tf.float32, name = "X_1")

X_2 = tf.placeholder(tf.float32, name = "X_2")

2. Define the computation

multiply = tf.multiply(X_1, X_2, name = "multiply")

3. Execute the operation

with tf.Session() as session:

    result = session.run(multiply, feed_dict={X_1: [1, 2, 3], X_2: [4, 5, 6]})
    print(result)

One common practice in TensorFlow is to create a pipeline to load the data.
If you follow these five steps, you'll be able to load data into TensorFlow:
1. Create the data

import numpy as np

x_input = np.random.sample((1,2))

print(x_input)

2. Create the placeholder

x = tf.placeholder(tf.float32, shape=[1,2], name = 'X')

3. Define the dataset method

dataset = tf.data.Dataset.from_tensor_slices(x)

4. Create the pipeline

iterator = dataset.make_initializable_iterator()
get_next = iterator.get_next()

5. Execute the program

with tf.Session() as sess:

    sess.run(iterator.initializer, feed_dict={ x: x_input })
    print(sess.run(get_next))
