Deep Learning With PyTorch
PyTorch is one of the most popular deep learning frameworks, with a syntax similar to NumPy. In the context of PyTorch, you can think of a Tensor as a NumPy array that can be run on a CPU or a GPU, and that has a backward() method for automatic differentiation (needed for backpropagation). TorchText, TorchVision, and TorchAudio are Python packages that provide PyTorch with functionality for text, image, and audio data, respectively.

# Core imports used by the snippets below
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader

Working with tensors

# Create a tensor of zeros with zeros()
tnsr_zrs = torch.zeros(2, 3)
# Create a random tensor with rand()
tnsr_rndm = torch.rand(size=(3, 4))  # Tensor has 3 rows, 4 columns
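For example, a tensor created with requires_grad=True records the operations applied to it, so gradients can be computed automatically with backward(); a minimal sketch with arbitrary values:

# A tensor that tracks operations for automatic differentiation
x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()  # y = x1^2 + x2^2
y.backward()        # backpropagation fills in x.grad
print(x.grad)       # dy/dx = 2 * x -> tensor([4., 6.])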
Datasets and dataloaders

# Create a dataset from a pandas DataFrame with TensorDataset()
X = df[feature_columns].values
y = df[target_column].values  # target_column: the label column of the DataFrame
dataset = TensorDataset(torch.tensor(X).float(), torch.tensor(y).float())
# Load the data in shuffled batches with DataLoader()
dataloader = DataLoader(dataset, batch_size=n, shuffle=True)
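As a concrete sketch of how the dataset and dataloader fit together (the DataFrame, column names, and batch size here are made up for illustration):

import pandas as pd

df = pd.DataFrame({
    "x1": [0.1, 0.2, 0.3, 0.4],
    "x2": [1.0, 2.0, 3.0, 4.0],
    "label": [0.0, 1.0, 0.0, 1.0],
})
feature_columns = ["x1", "x2"]

features = torch.tensor(df[feature_columns].values).float()
targets = torch.tensor(df["label"].values).float()

# Pair features and targets so each sample is a (features, target) tuple
dataset = TensorDataset(features, targets)
dataloader = DataLoader(dataset, batch_size=2, shuffle=True)

for batch_features, batch_targets in dataloader:
    print(batch_features.shape, batch_targets.shape)  # torch.Size([2, 2]) torch.Size([2])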
The training loop

# Set model to training mode
model.train()
# Choose a loss criterion, e.g. mean squared error with MSELoss()
loss_criterion = nn.MSELoss()
# Loop over batches of data from the dataloader
for features, targets in dataloader:
    # Set the gradients to zero with .zero_grad()
    optimizer.zero_grad()
    # Run a forward pass to get predictions
    predictions = model(features)
    # Calculate loss
    loss = loss_criterion(predictions, targets)
    # Calculate gradients using backpropagation
    loss.backward()
    # Update the model parameters
    optimizer.step()
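Putting these pieces together, here is a minimal end-to-end training sketch; the synthetic data, model shape, learning rate, and number of epochs are illustrative assumptions, not values from this cheat sheet:

# Synthetic regression data: 64 samples with 3 features
X = torch.rand(64, 3)
y = torch.rand(64, 1)
train_loader = DataLoader(TensorDataset(X, y), batch_size=8, shuffle=True)

# A small sequential model (sizes chosen arbitrarily)
model = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1))
loss_criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

model.train()
for epoch in range(5):
    for features, targets in train_loader:
        optimizer.zero_grad()                         # reset gradients
        loss = loss_criterion(model(features), targets)
        loss.backward()                               # backpropagation
        optimizer.step()                              # update weights and biases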
A neural network consists of neurons that are arranged into layers. Input values are passed to the first layer of the network. Each neuron has two properties: a weight and a bias. The output of a neuron is a weighted sum of its inputs, plus the bias. That output is passed on to any connected neurons in the next layer, and this continues until the final layer of the network is reached.

An activation function is a transformation of the output from a neuron, and is used to introduce non-linearity into the network.

Backpropagation is an algorithm used to train neural networks by iteratively adjusting the weights and biases of each neuron in proportion to their contribution to the loss.

Saturation is when the output from a neuron reaches a maximum or minimum value beyond which it cannot change. This can reduce learning performance, and an activation function such as ReLU may be needed to avoid the problem.
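As a tiny illustration of the weighted-sum-plus-bias idea, and of an activation applied to it (all numbers are made up):

# One neuron with three inputs: output = w . x + b
x = torch.tensor([1.0, 2.0, 3.0])    # inputs
w = torch.tensor([0.5, -1.0, 0.25])  # weights
b = torch.tensor(0.1)                # bias
output = torch.dot(w, x) + b         # weighted sum of inputs, plus the bias
print(output)                        # tensor(-0.6500)

# An activation function transforms the neuron's output non-linearly
relu = nn.ReLU()
print(relu(output))                  # negative inputs map to 0: tensor(0.)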
The loss function quantifies the difference between the predicted output of a model and the actual target output.

The optimizer is an algorithm that adjusts the parameters (neuron weights and biases) of a neural network during training in order to minimize the loss.

The learning rate controls the step size of the optimizer. If the learning rate is too low, the optimization will take too long. If it is too high, the optimizer will not effectively minimize the loss function, leading to poor predictions.

Momentum controls the inertia of the optimizer. If momentum is too low, the optimizer can get stuck in a local minimum and give the wrong answer. If it is too high, the optimizer can fail to converge and not give an answer.
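In code, the loss function, optimizer, learning rate, and momentum come together when the optimizer is created; the values below are illustrative, not recommendations:

loss_criterion = nn.MSELoss()  # quantifies prediction error
model = nn.Linear(3, 1)        # any model whose parameters should be optimized

# Stochastic gradient descent with an explicit learning rate and momentum
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

Other optimizers, such as optim.Adam, accept a learning rate in the same way.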
Sequential model architecture

# Create a linear layer with m inputs, n outputs with Linear()
lnr = nn.Linear(m, n)
# Get weight of layer with .weight
lnr.weight
# Get bias of layer with .bias
lnr.bias
# Create a sigmoid activation layer for binary classification with Sigmoid()
nn.Sigmoid()
# Create a softmax activation layer for multi-class classification with Softmax()
nn.Softmax(dim=-1)
# Create a rectified linear unit activation layer to avoid saturation with ReLU()
nn.ReLU()
# Create a leaky rectified linear unit activation layer to avoid saturation with LeakyReLU()
nn.LeakyReLU(negative_slope=0.05)
# Create a dropout layer to regularize and prevent overfitting with Dropout()
nn.Dropout(p=0.5)
# Create a sequential model from layers
model = nn.Sequential(
    nn.Linear(n_features, i),
    nn.Linear(i, j),  # Input size must match output from previous layer
    nn.Linear(j, n_classes),
)
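In practice the activation and dropout layers are interleaved between the linear layers; a short sketch with arbitrary sizes (n_features = 4 and n_classes = 3 are assumptions):

n_features, n_classes = 4, 3

model = nn.Sequential(
    nn.Linear(n_features, 8),
    nn.ReLU(),
    nn.Dropout(p=0.2),
    nn.Linear(8, n_classes),
    nn.Softmax(dim=-1),
)

x = torch.rand(5, n_features)  # a batch of 5 samples
probs = model(x)
print(probs.shape)             # torch.Size([5, 3])
print(probs.sum(dim=-1))       # each row sums to 1 after Softmax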
Accuracy is a metric to determine how well a model fits a dataset. It quantifies the proportion of correctly predicted outcomes (classifications or predictions) compared to the total number of data points in the dataset.

The evaluation loop

import torchmetrics

# Set model to evaluation mode
model.eval()
# Create accuracy metric with Accuracy()
metric = torchmetrics.Accuracy(task="multiclass", num_classes=n_classes)
# Update the metric for each batch of validation or test data
for features, targets in dataloader:
    predictions = model(features)
    metric(predictions, targets)  # targets are class labels
# Calculate accuracy across all of the batches
accuracy = metric.compute()
print(f"Accuracy on all data: {accuracy}")
# Reset the metric for the next evaluation
metric.reset()
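A self-contained sketch of the accuracy metric itself, with made-up predictions (logits for three classes) and labels:

preds = torch.tensor([[2.0, 0.1, 0.3],
                      [0.2, 1.5, 0.1],
                      [0.1, 0.2, 2.2],
                      [1.0, 0.9, 0.8]])
labels = torch.tensor([0, 1, 2, 1])

metric = torchmetrics.Accuracy(task="multiclass", num_classes=3)
metric(preds, labels)    # update the metric with one batch
print(metric.compute())  # tensor(0.7500): 3 of 4 samples predicted correctly
metric.reset()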
Fine-tuning is a type of transfer learning where the early layers of a pretrained model are frozen, and only the layers close to the output are trained on the new data.

# Freeze a layer by switching off gradient tracking for its parameters
for param in layer.parameters():
    param.requires_grad = False

# Save a layer of a model to a file with save()
torch.save(layer, 'layer.pth')
# Load a layer of a model from a file with load()
new_layer = torch.load('layer.pth')
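A minimal freezing sketch, assuming a small Sequential model where only the last layer should stay trainable:

model = nn.Sequential(
    nn.Linear(4, 8),  # early layer: frozen
    nn.ReLU(),
    nn.Linear(8, 3),  # output layer: left trainable
)

# Switch off gradients for the first linear layer only
for param in model[0].parameters():
    param.requires_grad = False

trainable = [p for p in model.parameters() if p.requires_grad]
print(len(trainable))  # 2: the weight and bias of the last linear layer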
Loss functions

import torch.nn.functional as F

# One-hot encode class labels with one_hot()
F.one_hot(torch.tensor([0, 1, 2]), num_classes=3)  # Returns tensor of 0s and 1s

# Cast predictions and targets to the same dtype before computing a loss
prediction = model(input_data).double()
actual = torch.tensor(target_values).double()

# Calculate smooth L1 loss for regression with SmoothL1Loss()
l1_loss = nn.SmoothL1Loss()(prediction, actual)  # Returns tensor(x)
# Calculate binary cross-entropy loss for binary classification with BCELoss()
bce_loss = nn.BCELoss()(prediction, actual)
# Calculate cross-entropy loss for multi-class classification with CrossEntropyLoss()
ce_loss = nn.CrossEntropyLoss()(prediction, actual)
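A runnable cross-entropy sketch with made-up logits and labels for three classes:

# Raw model outputs (logits) for three samples and the true class labels
logits = torch.tensor([[2.0, 0.5, 0.1],
                       [0.2, 1.8, 0.3],
                       [0.1, 0.4, 1.2]])
labels = torch.tensor([0, 1, 2])

# CrossEntropyLoss expects logits and integer class labels
print(nn.CrossEntropyLoss()(logits, labels))  # a scalar tensor

# The same labels, one-hot encoded
print(F.one_hot(labels, num_classes=3))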