TensorFlow
TensorFlow is an open-source software library for machine learning across a range of tasks. It is a symbolic math
library, and is also used as a system for building and training neural networks to detect and decipher patterns and
correlations, analogous to human learning and reasoning. TensorFlow was developed by the Google Brain team for
internal Google use, and it is used for both research and production at Google, often replacing its closed-source
predecessor, DistBelief.
In this tutorial you will learn to:
- Initialize constants
- Initialize variables
- Initialize placeholders
- Start your own session
- Train algorithms
- Implement an optimization problem
We will walk you through these different applications, starting with an example in which we compute the loss
for a single training example:
loss = (ŷ − y)²    (1)
In [2]: # tensorflow is assumed imported in an earlier cell as: import tensorflow as tf
        y_hat = tf.constant(36, name='y_hat')            # Define y_hat constant. Set to 36.
        y = tf.constant(39, name='y')                    # Define y. Set to 39
        loss = tf.Variable((y - y_hat)**2, name='loss')  # Create a variable for the loss
        init = tf.global_variables_initializer()         # Op that initializes the loss variable
        with tf.Session() as session:                    # Create a session
            session.run(init)                            # Initialize the variables
            print(session.run(loss))                     # Evaluate and print the loss
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
9
When we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not
evaluate its value. To evaluate it, we had to run init = tf.global_variables_initializer(). That
initialized the loss variable, and in the last line we were finally able to evaluate loss and print its value.
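Equivalently, you can manage the session explicitly instead of using a with block; a minimal sketch reusing the
init and loss defined above:

session = tf.Session()      # create the session by hand
session.run(init)           # initialize the variables
print(session.run(loss))    # prints 9, as above
session.close()             # release the session's resources when done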
In [3]: a = tf.constant(2)
        b = tf.constant(3)
        c = tf.multiply(a, b)
        sess = tf.Session()   # printing c directly would only show a Tensor object, not its value
        print(sess.run(c))    # run the graph inside a session to obtain the value
6
To summarize, remember to initialize your variables, create a session, and run your operations inside the session.
Placeholders
A computational graph can be parameterized to accept external inputs, known as placeholders. The values for
placeholders are provided when the graph is run in a session.
When you first defined y or y_hat you did not have to specify values for them. A placeholder is simply a
variable that you will assign data to only later, when running the session.
Here's what's happening: When you specify the operations needed for a computation, you are telling TensorFlow
how to construct a computation graph. The computation graph can have some placeholders whose values you
will specify only at run time. Finally, when you run the session, you are telling TensorFlow to execute the
computation graph.
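As a minimal sketch (the placeholder x, its dtype, and the fed value are illustrative, not taken from the source):

x = tf.placeholder(tf.float32, name='x')       # a slot in the graph with no value yet
with tf.Session() as sess:
    print(sess.run(2 * x, feed_dict={x: 3}))   # feed x = 3 at run time; prints 6.0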
Variables
Variables are used to add trainable parameters to a graph. They are constructed with a type and an initial value.
Variables are not initialized when you call tf.Variable; to initialize the variables of a TensorFlow graph, we have
to call tf.global_variables_initializer(), as in the sketch below.
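For instance (a minimal sketch; the variable W and its initial value are illustrative):

W = tf.Variable([0.3], dtype=tf.float32)    # constructed with a type and an initial value
init = tf.global_variables_initializer()    # op that initializes every variable in the graph
with tf.Session() as sess:
    sess.run(init)                          # W holds no value until init has been run
    print(sess.run(W))                      # [0.3]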
The difference between tf.Variable and tf.placeholder lies in when the values are supplied. With tf.Variable you
must provide an initial value when you declare it; with tf.placeholder you don't provide an initial value at all.
Instead, the value is specified at run time through the feed_dict argument of Session.run.
A placeholder is used for feeding external data into a TensorFlow computation, i.e. from outside of the graph.
If you are training a learning algorithm, a placeholder is used for feeding in your training data, which means the
training data is not part of the computational graph. The placeholder behaves similarly to Python's input
statement, while a TensorFlow variable behaves more or less like a Python variable, as the sketch below illustrates.
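A minimal side-by-side sketch (the names v and p are illustrative):

v = tf.Variable(1.0)                 # an initial value is mandatory at declaration
p = tf.placeholder(tf.float32)       # no initial value; data arrives via feed_dict
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(v + p, feed_dict={p: 2.0}))   # prints 3.0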
In [6]: # numpy is assumed imported in an earlier cell as np
        n = 4
        p = 2
        npX = np.array([1, 2, 3, 4, 4, 2, 4, 3])
        npX = np.reshape(npX, newshape=(n, p), order='F')  # n x p design matrix, filled column-major
        npy = np.array([5, 4, 7, 7])                       # response vector (restored from the printed output below)
        npy = np.reshape(npy, newshape=(n, 1))
        print(npX)
        print(npy)
[[1 4]
[2 2]
[3 4]
[4 3]]
[[5]
[4]
[7]
[7]]
Summed over the n training examples, the loss becomes the sum of squared errors,

loss = ∑ᵢ (ŷ⁽ⁱ⁾ − y⁽ⁱ⁾)²    (2)

which in TensorFlow reads:

deltas = tf.square(y_hat - y)
loss = tf.reduce_sum(deltas)
We are building a linear regression framework using variables and placeholders. alpha and beta are the two sets
of parameters we are going to estimate; in this example, beta is a vector of size 2. Both are defined as variables,
since they will be trained. The sum of the squared differences between the actual values and the predictions is
the loss function, and we use gradient descent optimization to minimize it; a sketch of the full setup follows.
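The cells that assemble the graph and run the training loop are not reproduced here, so the following is a minimal
sketch of how they could look. The zero initial values, the learning rate (0.01), and the iteration count (1000,
printing every 100 steps) are illustrative assumptions, not values recovered from the source.

X = tf.placeholder(tf.float32, shape=(n, p))    # design matrix, fed at run time
y = tf.placeholder(tf.float32, shape=(n, 1))    # responses, fed at run time
alpha = tf.Variable(tf.zeros([1]))              # intercept, trainable (assumed initial value)
beta = tf.Variable(tf.zeros([p, 1]))            # coefficients, trainable (assumed initial value)
y_hat = tf.matmul(X, beta) + alpha              # linear predictions
deltas = tf.square(y_hat - y)                   # squared residuals
loss = tf.reduce_sum(deltas)                    # sum-of-squares loss, as above
train = tf.train.GradientDescentOptimizer(0.01).minimize(loss)  # assumed learning rate

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(1000):                       # assumed iteration count
        _, cur_loss = sess.run([train, loss], feed_dict={X: npX, y: npy})
        if i % 100 == 0:
            print("loss: %f" % cur_loss)
    est_alpha, est_beta = sess.run([alpha, beta])   # pull the trained parameters out of the graph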
loss: 7.103631
loss: 0.015721
loss: 0.008392
loss: 0.004480
loss: 0.002391
loss: 0.001277
loss: 0.000681
loss: 0.000364
loss: 0.000194
loss: 0.000104
Now let us check the estimated alpha and beta, and see how close the actual and predicted values are.
In [9]: print("alpha: ", est_alpha)
print("beta: ", est_beta)
y_hat = npX.dot(est_beta) + est_alpha
pd.DataFrame({"Actual":np.reshape(npy, (n,)), "Predicted":np.reshape(y_hat
, (n,))})
alpha: [0.0183324]
beta: [[0.998108 ]
[0.9960355]]
Out[9]:
Actual Predicted
0 5 5.000582
1 4 4.006619
2 7 6.996799
3 7 6.998871