9.4.1 Deep Learning and AI Development Framework Lab Guide-1
Deep Learning
Lab Guide
Student Version
Description
This guide introduces the following five exercises:
⚫ Exercise 1: MindSpore basics, which describes the basic syntax and common modules
of MindSpore.
⚫ Exercise 2: Handwritten character recognition, in which the MindSpore framework is
used to recognize handwritten characters.
⚫ Exercise 3: MobileNetV2 image classification, which mainly introduces the
classification of flower images using the lightweight network MobileNetV2.
⚫ Exercise 4: ResNet-50 image classification, which mainly introduces the
classification of flower images using the ResNet-50 model.
⚫ Exercise 5: TextCNN sentiment analysis, which mainly introduces sentiment analysis
of statements using the TextCNN model.
1 MindSpore Basics
1.1 Introduction
1.1.1 About This Lab
This exercise introduces the tensor data structure of MindSpore. By performing a series of
operations on tensors, you can understand the basic syntax of MindSpore.
1.1.2 Objectives
⚫ Master the method of creating tensors.
⚫ Master the attributes and methods of tensors.
1.2 Procedure
1.2.1 Introduction to Tensors
Tensors are a basic data structure in MindSpore network computing. For details about
the data types that tensors support, see the dtype description on the MindSpore official
website.
Tensors of different dimensions represent different data. For example, a 0-dimensional
tensor represents a scalar, a 1-dimensional tensor represents a vector, a 2-dimensional
tensor represents a matrix, and a 3-dimensional tensor can represent the three color
channels of an RGB image.
MindSpore tensors support different data types, including int8, int16, int32, int64, uint8,
uint16, uint32, uint64, float16, float32, float64, and bool_, which correspond to the data
types of NumPy.
During MindSpore computation, the Python int data type is converted into the defined
int64 type, and the Python float data type is converted into the defined float32 type.
The data type can be specified during tensor initialization. If the data type is not
specified, the initial values int, float, and bool generate 0-dimensional tensors with the
mindspore.int32, mindspore.float32, and mindspore.bool_ data types, respectively. The
data types of the 1-dimensional tensors generated from the initial values tuple and list
correspond to the data types of the elements stored in the tuple or list. If multiple data
types are contained, the MindSpore data type corresponding to the element type with the
highest priority is selected (Boolean < int < float). If the initial value is a Tensor, the
generated tensor inherits its data type. If the initial value is a NumPy array, the
generated tensor's data type corresponds to that of the array.
Code:
# Import MindSpore.
import mindspore
# The cell outputs multiple lines at the same time.
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import numpy as np
from mindspore import Tensor
from mindspore import dtype
# Use an array to create a tensor.
x = Tensor(np.array([[1, 2], [3, 4]]), dtype.int32)
x
Output:
Tensor(shape=[2, 2], dtype=Int32, value=
[[1, 2],
 [3, 4]])
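To see these default type-inference rules in action, the following minimal sketch prints the data type inferred for several initial values. The chosen initial values are illustrative, and the exact printed types may vary between MindSpore versions.
Code:
# Default data types inferred from Python initial values (assuming the imports above).
print(Tensor(1).dtype)                # int initial value
print(Tensor(1.0).dtype)              # float initial value
print(Tensor(True).dtype)             # bool initial value
print(Tensor([True, 1, 2.0]).dtype)   # mixed types: float has the highest priority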
Code:
m = Tensor(True, dtype.bool_)
m
Output:
Tensor(shape=[], dtype=Bool, value= True)
Code:
import mindspore.ops as ops
# Create all-one and all-zero tensors of the given shape.
shape = (2, 2)
ones = ops.Ones()
output = ones(shape, dtype.float32)
print(output)
zeros = ops.Zeros()
output = zeros(shape, dtype.float32)
print(output)
Output:
[[1. 1.]
[1. 1.]]
[[0. 0.]
[0. 0.]]
Code:
# Shape
x.shape
# Data type
x.dtype
# Dimension
x.ndim
# Size
x.size
Output:
(2, 2)
mindspore.int32
2
4
Code:
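# A minimal sketch that reproduces the 3-dimensional tensor shown in the output below;
# the exact statements are an assumption, and any equivalent construction works.
t = Tensor(np.arange(8).reshape((2, 2, 2)), dtype.float32)
print(t)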
Output:
[[[0. 1.]
[2. 3.]]
[[4. 5.]
[6. 7.]]]
Code:
import os
import mindspore.dataset as ds
import matplotlib.pyplot as plt
For datasets that cannot be directly loaded by MindSpore, you can build a custom
dataset class and use the GeneratorDataset API to customize data loading.
Code:
import numpy as np
np.random.seed(58)

class DatasetGenerator:
    # When a dataset object is instantiated, the __init__ function is called. You can perform
    # operations such as data initialization here.
    def __init__(self):
        self.data = np.random.sample((5, 2))
        self.label = np.random.sample((5, 1))

    # Define the __getitem__ function of the dataset class to support random access, obtaining
    # and returning data in the dataset based on the specified index value.
    def __getitem__(self, index):
        return self.data[index], self.label[index]

    # Define the __len__ function of the dataset class to return the number of samples in the dataset.
    def __len__(self):
        return len(self.data)

# After the dataset class is defined, the GeneratorDataset API can be used to load and access
# the dataset samples.
dataset_generator = DatasetGenerator()
dataset = ds.GeneratorDataset(dataset_generator, ["data", "label"], shuffle=False)
for data in dataset.create_dict_iterator():
    print(data["data"], data["label"])
Output:
The dataset APIs provided by MindSpore support data processing methods such as shuffle
and batch. You only need to call the corresponding function API to quickly process data.
In the following example, the datasets are shuffled, and two samples form a batch.
Code:
ds.config.set_seed(58)
# Shuffle the data sequence. buffer_size indicates the size of the shuffle buffer in the dataset.
dataset = dataset.shuffle(buffer_size=10)  # example buffer size
# Divide the dataset into batches. batch_size indicates the number of data records contained in each batch.
dataset = dataset.batch(batch_size=2)
for data in dataset.create_dict_iterator():
    print(data["data"], data["label"])
Output:
Code:
DATA_DIR = './MNIST/train'
# Obtain six samples.
mnist_dataset = ds.MnistDataset(DATA_DIR, num_samples=6, shuffle=False)
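# A sketch for visualizing the six samples with matplotlib; the subplot layout values
# are illustrative, and the "image" and "label" column names follow MnistDataset.
for i, item in enumerate(mnist_dataset.create_dict_iterator(output_numpy=True)):
    plt.subplot(2, 3, i + 1)
    plt.imshow(item["image"].squeeze(), cmap=plt.cm.gray)
    plt.title(int(item["label"]))
plt.show()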
Output:
Code:
import mindspore as ms
import mindspore.nn as nn
from mindspore import Tensor
import numpy as np
Output:
[[1. 1. 1.]
[2. 2. 2.]]
[[3. 3. 3.]
[6. 6. 6.]]
Code:
# Define a 2-D convolution layer (the layer parameters here are example settings).
conv2d = nn.Conv2d(16, 64, 3, pad_mode='same')
input_x = Tensor(np.ones([1, 16, 28, 28]), ms.float32)
print(conv2d(input_x).shape)
Output:
(1, 64, 28, 28)
Code:
relu = nn.ReLU()
input_x = Tensor(np.array([-1, 2, -3, 2, -1]), ms.float16)
output = relu(input_x)
print(output)
Output:
[0. 2. 0. 2. 0.]
Code:
# Define a 2 x 2 max-pooling layer with stride 2 (example settings).
max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
input_x = Tensor(np.ones([1, 6, 28, 28]), ms.float32)
print(max_pool2d(input_x).shape)
Output:
(1, 6, 14, 14)
Code:
flatten = nn.Flatten()
input_x = Tensor(np.ones([1, 16, 5, 5]), ms.float32)
output = flatten(input_x)
print(output.shape)
Output:
(1, 400)
The Cell class of MindSpore is the base class for building all networks and the basic unit
of a network. To define a neural network, you need to inherit the Cell class and override
the __init__ and construct methods.
Code:
class LeNet5(nn.Cell):
    """
    LeNet-5 network structure
    """
    def __init__(self, num_class=10, num_channel=1):
        super(LeNet5, self).__init__()
        # Standard LeNet-5 layer configuration.
        self.conv1 = nn.Conv2d(num_channel, 6, 5, pad_mode='valid')
        self.conv2 = nn.Conv2d(6, 16, 5, pad_mode='valid')
        self.fc1 = nn.Dense(16 * 5 * 5, 120)
        self.fc2 = nn.Dense(120, 84)
        self.fc3 = nn.Dense(84, num_class)
        self.relu = nn.ReLU()
        self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
        self.flatten = nn.Flatten()

    def construct(self, x):
        x = self.max_pool2d(self.relu(self.conv1(x)))
        x = self.max_pool2d(self.relu(self.conv2(x)))
        x = self.flatten(x)
        x = self.relu(self.fc1(x))
        x = self.relu(self.fc2(x))
        x = self.fc3(x)
        return x

# Instantiate the model and use the parameters_and_names method to view the model parameters.
model = LeNet5()
for name, param in model.parameters_and_names():
    print(name, param.shape)
Output:
A loss function is used to evaluate the difference between the predicted and actual values
of a model. Here, the mean absolute error loss function L1Loss is used. mindspore.nn.loss
also provides many other loss functions, such as SoftmaxCrossEntropyWithLogits, MSELoss,
and SmoothL1Loss.
The output value and target value are provided to compute the loss value. The method is
as follows:
Code:
import numpy as np
import mindspore.nn as nn
from mindspore import Tensor
import mindspore.dataset as ds
import mindspore as ms
loss = nn.L1Loss()
output_data = Tensor(np.array([[1, 2, 3], [2, 3, 4]]).astype(np.float32))
target_data = Tensor(np.array([[0, 2, 5], [3, 1, 1]]).astype(np.float32))
print(loss(output_data, target_data))
Output:
1.5
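Here L1Loss averages the absolute errors over all six elements: (|1-0| + |2-2| + |3-5| + |2-3| + |3-1| + |4-1|) / 6 = 9/6 = 1.5.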
Common deep learning optimization algorithms include SGD, Adam, Ftrl, LazyAdam,
Momentum, RMSProp, LARS, ProximalAdagrad, and LAMB.
Momentum optimizer: mindspore.nn.Momentum. A minimal usage sketch follows; the
network and the hyperparameter values are illustrative.
Code:
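# A minimal sketch; the learning_rate and momentum values are example settings.
net = LeNet5()
optim = nn.Momentum(params=net.trainable_params(), learning_rate=0.01, momentum=0.9)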
Code:
import mindspore.dataset.transforms.c_transforms as C
import mindspore.dataset.vision.c_transforms as CV
from mindspore.train.callback import LossMonitor
DATA_DIR = './MNIST/train'
mnist_dataset = ds.MnistDataset(DATA_DIR)
# Resize images to 28 x 28, rescale pixel values to [0, 1], and convert HWC format to CHW.
resize_op = CV.Resize((28, 28))
rescale_op = CV.Rescale(1/255, 0)
hwc2chw_op = CV.HWC2CHW()
# Apply the operations to the image column.
mnist_dataset = mnist_dataset.map(operations=[resize_op, rescale_op, hwc2chw_op], input_columns="image")
Code:
# Test set
DATA_DIR = './forward_mnist/MNIST/test'
dataset = ds.MnistDataset(DATA_DIR)
# Apply the same preprocessing as for the training set.
resize_op = CV.Resize((28, 28))
rescale_op = CV.Rescale(1/255, 0)
hwc2chw_op = CV.HWC2CHW()
dataset = dataset.map(operations=[resize_op, rescale_op, hwc2chw_op], input_columns="image")
1. One method is to use the save_checkpoint API to directly save a defined network model.
Code:
import mindspore as ms
# net indicates a defined network model, which is used before or after training.
# "./MyNet.ckpt" indicates the path for saving the network model.
ms.save_checkpoint(net, "./MyNet.ckpt")
2. The other method is to save the model during network training. MindSpore
automatically saves the numbers of epochs and steps set during training; that is, the
intermediate weight parameters generated during model training are also saved to
facilitate network fine-tuning and resuming training. A sketch using the ModelCheckpoint
callback follows; the configuration values are illustrative.
Code:
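# A sketch of checkpoint saving during training; the configuration values, prefix, and
# directory are example settings.
from mindspore.train.callback import ModelCheckpoint, CheckpointConfig
# Save a checkpoint every 32 steps and keep at most 10 checkpoint files.
config_ck = CheckpointConfig(save_checkpoint_steps=32, keep_checkpoint_max=10)
# Instantiate the checkpoint callback and define the storage path and file prefix.
ckpoint_cb = ModelCheckpoint(prefix="lenet", directory="./checkpoint", config=config_ck)
# Pass the callback to the training API, for example:
# model.train(epoch_size, dataset, callbacks=[ckpoint_cb, LossMonitor()])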
To compute the input derivative, you need to define the network to be differentiated. The
following uses a network f(x, y) = z * x * y formed by the MatMul operator as an example.
Code:
import numpy as np
import mindspore.nn as nn
import mindspore.ops as ops
from mindspore import Tensor
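import mindspore as ms
from mindspore import Parameter

# A minimal sketch of the f(x, y) = z * x * y example; the class names, parameter name,
# and input values below are illustrative.
class Net(nn.Cell):
    def __init__(self):
        super(Net, self).__init__()
        self.matmul = ops.MatMul()
        self.z = Parameter(Tensor(np.array([1.0], np.float32)), name='z')

    def construct(self, x, y):
        x = x * self.z
        out = self.matmul(x, y)
        return out

# Wrap the network with GradOperation to obtain the gradient with respect to the first input x.
class GradNetWrtX(nn.Cell):
    def __init__(self, net):
        super(GradNetWrtX, self).__init__()
        self.net = net
        self.grad_op = ops.GradOperation()

    def construct(self, x, y):
        gradient_function = self.grad_op(self.net)
        return gradient_function(x, y)

x = Tensor(np.array([[0.8, 0.6, 0.2], [1.8, 1.3, 1.1]]), ms.float32)
y = Tensor(np.array([[0.11, 3.3, 1.1], [1.1, 0.2, 1.4], [1.1, 2.2, 0.3]]), ms.float32)
output = GradNetWrtX(Net())(x, y)
print(output)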
Output:
1.3 Question
When the following code is used to create two tensors, t1 and t2, can t1 be created
properly? Check whether the two tensors have the same outputs. If not, what is the
difference?