Basic TensorBoard (from Principles and Labs for Deep Learning)

The document discusses the use of TensorBoard, a TensorFlow toolkit for visualizing and measuring machine learning experiments, including tracking loss and accuracy during training. It outlines how to implement TensorBoard in Keras and provides methods to open log files for visualization. Additionally, it introduces the concept of overfitting in neural networks, highlighting its effects on training and validation performance.

Uploaded by Akshay Hebbar

4. The average percentage error on test data
Predict house prices on the test data and calculate the average percentage error.
# Load the best model weights saved during training
model.load_weights('lab2-logs/models/Best-model-1.h5')
# Take out the house prices (targets)
y_test = np.array(test_data['price'])
# Data normalization (mean and std were computed on the training data)
test_data = (test_data - mean) / std
# Save the input data in NumPy format
x_test = np.array(test_data.drop('price', axis='columns'))
# Predict on test data
y_pred = model.predict(x_test)
# Convert the predictions back to the original price scale
y_pred = np.reshape(y_pred * std['price'] + mean['price'], y_test.shape)
# Calculate the mean percentage error
percentage_error = np.mean(np.abs(y_test - y_pred)) / np.mean(y_test) * 100
# Display the percentage error
print("Model_1 Percentage Error: {:.2f}%".format(percentage_error))
Result: Model_1 Percentage Error: 14.08%
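The percentage-error formula above can be checked on toy data. The values below are hypothetical and independent of the house-price dataset; only the formula is taken from the lab.

```python
import numpy as np

# Hypothetical true and predicted prices (not from the dataset)
y_true = np.array([100.0, 200.0, 300.0])
y_hat  = np.array([110.0, 190.0, 330.0])

# Same formula as in the lab: mean absolute error divided by the mean price
percentage_error = np.mean(np.abs(y_true - y_hat)) / np.mean(y_true) * 100
print("{:.2f}%".format(percentage_error))  # mean |error| = 50/3, mean price = 200 → 8.33%
```

Note that this metric normalizes by the mean price over the whole test set, so it is not the same as the mean of the per-house percentage errors (MAPE).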

2.4 Introduction to TensorBoard


TensorBoard is a TensorFlow toolkit that provides the visualization and measurement needed for machine learning experimentation, such as tracking the loss and accuracy of the model during training, visualizing the model graph, viewing histograms, and so on. There are two common ways to use TensorBoard: (1) adding the "tf.keras.callbacks.TensorBoard" callback to create and store logs when training with Keras Model.fit, and (2) using the "tf.summary" API to log information when training with "tf.GradientTape()" or other methods. The graphic interface of TensorBoard is shown in Fig. 2.14. As shown, there are four main tools in TensorBoard: the Scalars dashboard, the Graphs dashboard, the Distributions dashboard, and the Histograms dashboard.

FIG. 2.14 The graphic interface of TensorBoard.



▪ The Scalars dashboard: helps to track scalar values such as the learning rate, loss, and accuracy during the training of neural networks.
▪ The Graphs dashboard: helps to visualize the models built by TensorFlow.
▪ The Distributions and Histograms dashboards: help to display the distributions of tensors. They are widely used for visualizing the weights and biases of TensorFlow models.
The advantages and disadvantages of TensorBoard include:
▪ Advantages: Information produced during training, such as changes in the loss and accuracy and the histograms of weights and biases, can be tracked and viewed in real time, without having to wait until training is completed.
▪ Disadvantages: The information is written to the log file many times during training. If a lot of information is recorded, training time increases.
When training the house price-prediction model, the TensorBoard callback, "keras.callbacks.TensorBoard," was added to create and store the log. There are two ways to open the log file: the first is to open it directly in a Jupyter Notebook, and the second is to run TensorBoard from a terminal and then observe the results through a browser.
▪ Open log file with Jupyter Notebook (results are shown in Fig. 2.15)

FIG. 2.15 Visualizing metrics (loss and accuracy) on TensorBoard (1).


2.4 Introduction to TensorBoard 45

# Load the TensorBoard extension directly in the Jupyter notebook
%load_ext tensorboard
# Run TensorBoard and specify the log file folder as lab2-logs
%tensorboard --logdir lab2-logs
▪ Open the log file from the command line
- Go to the location where the TensorBoard log file is stored and run the command below. The result can be observed at the URL http://localhost:6006/, as shown in Fig. 2.16.

tensorboard --logdir lab2-logs

- A port number can also be specified for displaying the result; with the command below, the result can be observed at the URL http://localhost:9527/, as shown in Fig. 2.16.

tensorboard --port 9527 --logdir lab2-logs
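TensorBoard scans the folder passed to --logdir recursively and treats each subdirectory as a separate run, so a common pattern is to create one timestamped subdirectory per training run and point the callback at it. A minimal sketch of that pattern (the helper name and folder layout are illustrative, not from the lab):

```python
import os
from datetime import datetime

def make_run_dir(root="lab2-logs"):
    """Create a timestamped subdirectory for one training run."""
    run_name = datetime.now().strftime("run-%Y%m%d-%H%M%S")
    run_dir = os.path.join(root, run_name)
    os.makedirs(run_dir, exist_ok=True)
    return run_dir

log_dir = make_run_dir()
print(log_dir)  # e.g. lab2-logs/run-20240101-120000
```

Passing a fresh directory like this as the callback's log_dir keeps every run separate, so their curves can be compared side by side in the Scalars dashboard instead of being overwritten.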

FIG. 2.16 Visualizing metrics (loss and accuracy) on TensorBoard (2).

In addition to metrics such as loss and accuracy, the model graph is also visualized, as shown in Fig. 2.17. We discuss
the other visualization functions of TensorBoard such as Images, Text, Audio, and so on in Chapter 7.

FIG. 2.17 Visualizing the model graph on TensorBoard.

2.5 Experiment 2: Overfitting problem

2.5.1 Introduction to overfitting


Overfitting refers to a network model that achieves very good performance on the training data but poor performance on the validation data. The training loss curves are usually used to observe whether there is an overfitting problem. Fig. 2.18 presents the overfitting phenomenon: after a period of training, the loss value on the training data (training error) continues to decrease, while the loss value on the validation data (validation error) gradually increases.

FIG. 2.18 Overfitting phenomenon. The figure plots error against model capacity: in the underfitting zone both training and validation error are high, while beyond the optimal capacity (the overfitting zone) the training error keeps decreasing but the validation error rises, so the gap between the two curves widens.

The training result of the house price-prediction model in Section 2.3 is shown in Fig. 2.19, and the overfitting phenomenon can also be observed in its loss curves.
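Reading loss curves like those in Fig. 2.19 can also be done programmatically: the epoch with the lowest validation loss marks where overfitting begins. The per-epoch values below are made up for illustration and are not the lab's actual training history.

```python
# Hypothetical per-epoch loss values (not from the lab's training run)
train_loss = [1.00, 0.60, 0.40, 0.30, 0.24, 0.20, 0.17, 0.15]
val_loss   = [1.05, 0.70, 0.52, 0.45, 0.44, 0.46, 0.50, 0.55]

# The epoch with the lowest validation loss; training past this point
# mainly fits noise in the training data (overfitting)
best_epoch = min(range(len(val_loss)), key=val_loss.__getitem__)
print("best epoch:", best_epoch)           # → best epoch: 4
print("best val loss:", val_loss[best_epoch])  # → best val loss: 0.44
```

This is the idea behind early stopping (e.g. Keras's EarlyStopping callback): monitor the validation loss and stop, or restore the best weights, once it stops improving.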
