TensorFlow
(https://www.geeksforgeeks.org/introduction-tensor-tensorflow/?ref=lbp)
Summary
Mathematics is considered the heart of any machine learning algorithm. It is with the help of core concepts of mathematics that a solution for a specific machine learning algorithm is defined.
Vector
An array of numbers, which is either continuous or discrete, is defined as a vector.
Machine learning algorithms deal with fixed-length vectors for better output
generation. Vectors therefore play a crucial role in machine learning.
Scalar
A scalar can be defined as a vector with a single component. Scalars include
only magnitude and no direction; with scalars, we are only concerned with the
magnitude.
Matrix
A matrix is a two-dimensional array arranged in rows and columns. The size of a
matrix is defined by its row length and column length. A matrix with “m” rows and
“n” columns is specified as an “m*n matrix”, which also defines its size.
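As a quick illustration, here is a minimal NumPy sketch (the values are made up):

import numpy as np

# A matrix with m = 2 rows and n = 3 columns, i.e., a "2*3 matrix"
M = np.array([(1, 2, 3), (4, 5, 6)])
print(M.shape)   # (2, 3)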
Mathematical Computations
In this section, we will learn about the different Mathematical Computations in
TensorFlow.
Addition of matrices
Addition of two or more matrices is possible if the matrices have the same
dimensions. The addition is element-wise: elements at the same position are added together.
Subtraction of matrices
The subtraction of matrices operates in a similar fashion to the addition of two
matrices. The user can subtract two matrices provided the dimensions are equal.
Multiplication of matrices
For two matrices A (m*n) and B (p*q) to be multiplied, n must be equal to p. The
resulting matrix is C (m*q).
Transpose of matrix
The transpose of a matrix A (m*n) is generally represented by AT, an n*m matrix obtained by interchanging the rows and columns of A.
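For instance, in the TF 1.x style used elsewhere in this tutorial (the sample values are illustrative):

import tensorflow as tf
import numpy as np

# Transpose a 2*3 matrix into a 3*2 matrix
A = tf.constant(np.array([(1, 2, 3), (4, 5, 6)], dtype='int32'))
A_T = tf.transpose(A)

with tf.Session() as sess:
    print(sess.run(A_T))
# [[1 4]
#  [2 5]
#  [3 6]]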
Tensors
Tensors are the basic data structures in TensorFlow. They represent the connecting edges in any flow diagram called the Data Flow Graph. A tensor has the following attributes:
Rank
Unit of dimensionality described within the tensor is called rank. It identifies the number of dimensions of the tensor.
Shape
The number of rows and columns together define the shape of a Tensor.
Type
Type describes the data type assigned to the Tensor’s elements.
The different dimensions of a Tensor are described in brief below −
Declaration
A one-dimensional Tensor is a normal array structure that includes one set of values of the same data type. The indexing of elements is the same as in Python lists: the first element starts with an index of 0; to print the values through an index, all you need to do is mention the index number.
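For example, a one-dimensional tensor can be declared with NumPy (the sample values are illustrative):

import numpy as np

tensor_1d = np.array([1.3, 1, 4.0, 23.99])
print(tensor_1d)      # [ 1.3   1.    4.   23.99]
print(tensor_1d[0])   # 1.3
print(tensor_1d[2])   # 4.0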
import tensorflow as tf
import numpy as np

# Two sample 3x3 matrices (assumed values for illustration)
matrix1 = np.array([(2,2,2),(2,2,2),(2,2,2)], dtype='int32')
matrix2 = np.array([(1,1,1),(1,1,1),(1,1,1)], dtype='int32')

print(matrix1)
print(matrix2)

# Wrap the NumPy arrays as TensorFlow constants
matrix1 = tf.constant(matrix1)
matrix2 = tf.constant(matrix2)

matrix_product = tf.matmul(matrix1, matrix2)   # matrix multiplication
matrix_sum = tf.add(matrix1, matrix2)          # element-wise addition

matrix_3 = np.array([(2,7,2),(1,4,2),(9,0,2)], dtype='float32')
print(matrix_3)

matrix_det = tf.matrix_determinant(matrix_3)   # determinant (TF 1.x API)

with tf.Session() as sess:
    result1 = sess.run(matrix_product)
    result2 = sess.run(matrix_sum)
    result3 = sess.run(matrix_det)

print(result1)
print(result2)
print(result3)
Output
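With the sample matrices above, the program prints:

[[2 2 2]
 [2 2 2]
 [2 2 2]]
[[1 1 1]
 [1 1 1]
 [1 1 1]]
[[2. 7. 2.]
 [1. 4. 2.]
 [9. 0. 2.]]
[[6 6 6]
 [6 6 6]
 [6 6 6]]
[[3 3 3]
 [3 3 3]
 [3 3 3]]
56.0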
Explanation
We created multidimensional arrays in the source code above. It is important to
understand that we then created a graph and a session, which manage the
Tensors and generate the appropriate output. With the help of the graph, we obtain
the output specifying the mathematical calculations between the Tensors.
The following snippet creates a variable in a graph, writes the graph for TensorBoard, and runs the variable initialization (it needs the ops module import for resetting the default graph):

import tensorflow as tf
from tensorflow.python.framework import ops

# Reset graph
ops.reset_default_graph()

# Start a session
sess = tf.Session()

# Create variable
my_var = tf.Variable(tf.zeros([1,20]))

# Merge all summaries and write the graph to a log directory
merged = tf.summary.merge_all()
writer = tf.summary.FileWriter("/tmp/variable_logs", graph=sess.graph)

# Initialize operation
initialize_op = tf.global_variables_initializer()

# Run initialization of variable
sess.run(initialize_op)

To visualize the graph in TensorBoard, run the following command in a terminal:

tensorboard --logdir=/tmp
TensorFlow Components
Tensor
TensorFlow’s name is directly derived from its core framework: the tensor. In TensorFlow,
all computations involve tensors. A tensor is an n-dimensional vector or matrix that can
represent all types of data. All values in a tensor hold an identical data type with a known
(or partially known) shape. The shape of the data is the dimensionality of the matrix or
array.
A tensor can originate from the input data or from the result of a computation. In
TensorFlow, all operations are conducted inside a graph. The graph is a set of
computations that take place successively. Each operation is called an op node, and op
nodes are connected to each other.
The graph outlines the ops and connections between the nodes. However, it does not
display the values. The edges of the nodes are tensors, i.e., a way to populate the
operation with data.
Graphs (https://www.tensorflow.org/guide/intro_to_graphs#setup)
TensorFlow makes use of a graph framework. The graph gathers and describes all the
series of computations done during training. The graph has many advantages:
● It was designed to run on multiple CPUs or GPUs, and even on mobile operating systems.
● The portability of the graph allows you to preserve the computations for immediate or
later use. The graph can be saved and executed in the future.
● All the computations in the graph are done by connecting tensors together.
○ A tensor has a node and an edge. The node carries the mathematical
operation and produces an endpoint output. The edges explain the
input/output relationships between nodes.
Graph execution means that tensor computations are executed as a TensorFlow graph,
sometimes referred to as a tf.Graph or simply a "graph."
Graphs are data structures that contain a set of tf.Operation objects, which represent units of
computation; and tf.Tensor objects, which represent the units of data that flow between
operations. They are defined in a tf.Graph context. Since these graphs are data structures, they
can be saved, run, and restored all without the original Python code.
This is what a TensorFlow graph representing a two-layer neural network looks like when
visualized in TensorBoard.
[Figure: TensorBoard visualization of a two-layer neural network graph]
The benefits of graphs
With a graph, you have a great deal of flexibility. You can use your TensorFlow graph in
environments that don't have a Python interpreter, like mobile applications, embedded devices,
and backend servers. TensorFlow uses graphs as the format for saved models when it exports
them from Python.
Graphs are also easily optimized, allowing the compiler to do transformations like:
● Statically infer the value of tensors by folding constant nodes in your computation ("constant folding").
● Separate sub-parts of a computation that are independent and split them between threads or devices.
● Simplify arithmetic operations by eliminating common subexpressions.
There is an entire optimization system, Grappler, to perform this and other speedups.
In short, graphs are extremely useful and let your TensorFlow run fast, run in parallel, and run
efficiently on multiple devices.
However, you still want to define your machine learning models (or other computations) in
Python for convenience, and then automatically construct graphs when you need them.
https://www.tensorflow.org/guide/intro_to_graphs#setup
Setup
import tensorflow as tf
import timeit
from datetime import datetime
You create and run a graph in TensorFlow by using tf.function, either as a direct call or as a
decorator. tf.function takes a regular function as input and returns a Function. A Function is a
Python callable that builds TensorFlow graphs from the Python function. You use a Function in
the same way as its Python equivalent.
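A minimal sketch (assuming TensorFlow 2.x, where tf.function is available):

import tensorflow as tf

# A regular Python function.
def multiply_and_add(x, y):
    return x * y + 1.0

# tf.function wraps it into a Function that builds a TensorFlow graph.
graph_fn = tf.function(multiply_and_add)

# The Function is called the same way as its Python equivalent.
print(graph_fn(tf.constant(2.0), tf.constant(3.0)))
# tf.Tensor(7.0, shape=(), dtype=float32)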
TensorFlow attracts the largest popularity on GitHub compared to the other deep learning
frameworks.
TensorFlow Algorithms
● Classification: tf.estimator.LinearClassifier
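As a hedged sketch of how such an estimator is set up (the feature name "x" and the toy data are illustrative, not from the original text; the TF 1.x estimator API is assumed):

import tensorflow as tf

# One numeric input feature named "x" (illustrative)
feature_columns = [tf.feature_column.numeric_column("x", shape=[1])]
classifier = tf.estimator.LinearClassifier(feature_columns=feature_columns, n_classes=2)

def input_fn():
    # Tiny toy dataset, purely for illustration
    features = {"x": tf.constant([[1.0], [2.0], [3.0], [4.0]])}
    labels = tf.constant([0, 0, 1, 1])
    return features, labels

classifier.train(input_fn=input_fn, steps=10)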
import tensorflow as tf
In the first line of code, we have imported tensorflow as tf. With Python, it is a
common practice to use a short name for a library. The advantage is to avoid typing the
full name of the library each time we need to use it. For instance, we can import tensorflow
as tf, and call tf when we want to use a TensorFlow function.
Let’s practice the elementary workflow of TensorFlow with a simple example.
Let’s create a computational graph that multiplies two numbers together.
In this example, we will multiply X_1 and X_2 together. TensorFlow will create a
node to connect the operation; in our example, it is called multiply. When the graph is
determined, the TensorFlow computational engine will multiply X_1 and X_2 together.
TensorFlow Example
Finally, we will run a TensorFlow session that will run the computational graph with the
values of X_1 and X_2 and print the result of the multiplication.
Let’s define the X_1 and X_2 input nodes. When we create a node in TensorFlow, we
have to choose what kind of node to create. The X_1 and X_2 nodes will be placeholder
nodes. A placeholder is assigned a new value each time we make a calculation. We will
create them as tf.placeholder nodes.
When we create a placeholder node, we have to pass in the data type. We will be adding
numbers here, so we can use a floating-point data type; let’s use tf.float32. We also
need to give this node a name. This name will show up when we look at the graphical
visualizations of our model. Let’s name this node X_1 by passing in a parameter called
name with a value of X_1, and then let’s define X_2 the same way.
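Following the description above, the two placeholder nodes look like this (TF 1.x API):

X_1 = tf.placeholder(tf.float32, name = "X_1")
X_2 = tf.placeholder(tf.float32, name = "X_2")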
Now we can define the node that does the multiplication operation. In Tensorflow we
can do that by creating a tf.multiply node.
We will pass in the X_1 and X_2 nodes to the multiplication node. This tells TensorFlow to
link those nodes in the computational graph, so we are asking it to pull the values of X_1
and X_2 and multiply them. Let’s also give the multiplication node the name multiply. That
is the entire definition of our simple computational graph.
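In code, the multiplication node is a single line (TF 1.x API):

multiply = tf.multiply(X_1, X_2, name = "multiply")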
When the multiplication operation runs, it is going to see that it needs to grab the values of
the X_1 and X_2 nodes, so we also need to feed in values for X_1 and X_2. We can do
that by supplying a parameter called feed_dict. We pass the values 1, 2, 3 for X_1 and
4, 5, 6 for X_2.
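Putting it together, a session runs the graph with the fed values:

with tf.Session() as session:
    result = session.run(multiply, feed_dict={X_1: [1, 2, 3], X_2: [4, 5, 6]})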
We print the results with print(result). We should see 4, 10 and 18 for 1×4, 2×5 and 3×6.
print(result)
[ 4. 10. 18.]
Options to Load Data into TensorFlow
The first step before training a machine learning algorithm is to load the data. There
are two common ways to load data:
1. Load data into memory: It is the simplest method. You load all your data into
memory as a single array. You can write Python code for this; these lines of code are
unrelated to TensorFlow.
2. TensorFlow data pipeline: TensorFlow has a built-in API that helps you load the
data, perform the operations, and feed the machine learning algorithm easily. This
method works very well, especially when you have a large dataset. For instance, image
records are known to be enormous and do not fit into memory. The data pipeline
manages the memory by itself.
If your dataset is not too big, i.e., less than 10 gigabytes, you can use the first method.
The data can fit into the memory. You can use a famous library called Pandas to import
CSV files. You will learn more about pandas in the next tutorial.
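For instance, a minimal in-memory load with Pandas could look like this (the file name data.csv is hypothetical):

import pandas as pd

# Hypothetical CSV file; replace with your own dataset
df = pd.read_csv("data.csv")
data = df.values   # the whole dataset now lives in memory as a NumPy array
print(data.shape)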
The second method works best if you have a large dataset. For instance, if you have a
dataset of 50 gigabytes and your computer has only 16 gigabytes of memory, then the
machine will crash.
In this situation, you need to build a TensorFlow pipeline. The pipeline will load the data
in batches, or small chunks. Each batch is pushed to the pipeline, ready for the
training. Building a pipeline is an excellent solution because it allows you to use parallel
computing: TensorFlow will train the model across multiple CPUs. This speeds up the
computation and permits training powerful neural networks.
You will see in the next tutorials how to build a complete pipeline to feed your neural
network.
In a nutshell, if you have a small dataset, you can load the data in memory with the
Pandas library.
If you have a large dataset and you want to make use of multiple CPUs, then you will be
more comfortable working with the TensorFlow pipeline.
In the example before, we manually added three values for X_1 and X_2. Now, we will
see how to load data to Tensorflow:
First of all, let’s use the numpy library to generate two random values.
import numpy as np
x_input = np.random.sample((1,2))
print(x_input)
[[0.8835775 0.23766977]]
# using a placeholder
x = tf.placeholder(tf.float32, shape=[1,2], name = 'X')
Next, we need to define the Dataset where we can populate the value of the
placeholder x. We need to use the method tf.data.Dataset.from_tensor_slices
dataset = tf.data.Dataset.from_tensor_slices(x)
In step four, we need to initialize the pipeline where the data will flow. We need to
create an iterator with make_initializable_iterator. We name it iterator. Then we need to
call this iterator to feed the next batch of data, get_next. We name this step get_next.
Note that in our example, there is only one batch of data with only two values.
iterator = dataset.make_initializable_iterator()
get_next = iterator.get_next()
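Finally, we create a session, initialize the iterator by feeding the placeholder with the data generated above, and fetch the batch; the printed values then match x_input:

with tf.Session() as sess:
    sess.run(iterator.initializer, feed_dict={x: x_input})
    print(sess.run(get_next))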
[0.8835775 0.23766978]
Summary
● TensorFlow meaning: TensorFlow is the most famous deep learning library of
recent years. A practitioner using TensorFlow can build any deep learning
structure, like a CNN, an RNN, or a simple artificial neural network.
● The Google Brain team developed TensorFlow to fill the gap between researchers
and product developers. In 2015, they made TensorFlow public; it has been rapidly
growing in popularity ever since. Nowadays, TensorFlow is the deep learning library with the
most repositories on GitHub.
● Practitioners use TensorFlow because it is easy to deploy at scale. It is built to
work in the cloud or on mobile devices like iOS and Android.
One common practice in TensorFlow is to create a pipeline to load the data. If you follow
these five steps, you’ll be able to load data into TensorFlow:
1. Create the data
import numpy as np
x_input = np.random.sample((1,2))
print(x_input)
2. Create the placeholder
x = tf.placeholder(tf.float32, shape=[1,2], name = 'X')
3. Define the dataset method
dataset = tf.data.Dataset.from_tensor_slices(x)
4. Create the pipeline
iterator = dataset.make_initializable_iterator()
get_next = iterator.get_next()
5. Execute the program
with tf.Session() as sess:
    sess.run(iterator.initializer, feed_dict={x: x_input})
    print(sess.run(get_next))