Practical 1
Implementing advanced deep learning algorithms such as convolutional neural networks (CNNs) or
recurrent neural networks (RNNs) using Python libraries like TensorFlow or PyTorch.
This practical provides step-by-step executable code for implementing Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) using TensorFlow and PyTorch in Python. The examples use the CIFAR-10 dataset (for the CNN) and the IMDB dataset (for the RNN).
CNN using TensorFlow (CIFAR-10 dataset)
1. Install Dependencies:
pip install tensorflow matplotlib
2. Code (cnn_tensorflow.py):
import tensorflow as tf
from tensorflow.keras import layers, models
import matplotlib.pyplot as plt

# Load and normalize the CIFAR-10 dataset (32x32 RGB images, 10 classes)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Define a simple CNN: two convolution/pooling stages followed by a dense classifier
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])

# Compile and train (epoch count can be adjusted)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
history = model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))

# Plot training and validation curves
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(history.history['accuracy'], label='Train')
plt.plot(history.history['val_accuracy'], label='Validation')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.title('Model Accuracy')
plt.subplot(1, 2, 2)
plt.plot(history.history['loss'], label='Train')
plt.plot(history.history['val_loss'], label='Validation')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.title('Model Loss')
plt.show()
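
To also report performance on the held-out CIFAR-10 test split, a short evaluation step can be appended to cnn_tensorflow.py. The minimal sketch below uses Keras' model.evaluate with the x_test and y_test arrays defined above; it is an optional addition to the listing, and the verbosity setting is just an illustrative choice.

# Optional: score the trained CNN on the test split (append after plt.show())
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print(f"Test accuracy: {test_acc:.4f}")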
RNN using PyTorch (IMDB dataset)
1. Install Dependencies:
pip install torch torchtext
2. Code (rnn_pytorch.py):
import torch
import torch.nn as nn
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator
from torchtext.datasets import IMDB

# Tokenizer and vocabulary built from the IMDB training split
tokenizer = get_tokenizer('basic_english')

def yield_tokens(data_iter):
    for _, text in data_iter:
        yield tokenizer(text)

vocab = build_vocab_from_iterator(yield_tokens(IMDB(split='train')), specials=["<unk>"])
vocab.set_default_index(vocab["<unk>"])

# Convert a batch of (label, text) pairs into padded index tensors
def collate_batch(batch):
    label_list, text_list = [], []
    for label, text in batch:
        label_list.append(0 if label in (1, 'neg') else 1)  # label format varies by torchtext version
        text_list.append(torch.tensor(vocab(tokenizer(text)), dtype=torch.int64))
    texts = nn.utils.rnn.pad_sequence(text_list, batch_first=True)
    return texts, torch.tensor(label_list, dtype=torch.int64)

# Embedding -> RNN -> linear classifier on the final hidden state
class RNNModel(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_classes):
        super(RNNModel, self).__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.RNN(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, text):
        _, hidden = self.rnn(self.embedding(text))
        return self.fc(hidden.squeeze(0))

model = RNNModel(len(vocab), embed_dim=64, hidden_dim=128, num_classes=2)  # sizes are typical choices
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One pass over the training data
def train(dataloader):
    model.train()
    total_loss = 0
    for texts, labels in dataloader:
        optimizer.zero_grad()
        outputs = model(texts)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    return total_loss / len(dataloader)

# Classification accuracy on a held-out split
def evaluate(dataloader):
    correct = 0
    total = 0
    model.eval()
    with torch.no_grad():
        for texts, labels in dataloader:
            outputs = model(texts)
            predictions = outputs.argmax(1)
            correct += (predictions == labels).sum().item()
            total += labels.size(0)
    return correct / total
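
The listing above defines the data pipeline, model, and train/evaluate helpers, but a small driver is still needed to run them. The sketch below (appended to rnn_pytorch.py) is one way to wire the pieces together with a standard PyTorch DataLoader over the torchtext IMDB splits; the batch size of 16 and the 3 training epochs are illustrative assumptions, not values from the original listing.

from torch.utils.data import DataLoader

# Build DataLoaders from the IMDB splits (materialized as lists so shuffling works);
# batch size and epoch count below are assumptions and can be tuned
train_loader = DataLoader(list(IMDB(split='train')), batch_size=16,
                          shuffle=True, collate_fn=collate_batch)
test_loader = DataLoader(list(IMDB(split='test')), batch_size=16,
                         collate_fn=collate_batch)

for epoch in range(3):
    loss = train(train_loader)
    acc = evaluate(test_loader)
    print(f"Epoch {epoch + 1}: train loss = {loss:.4f}, test accuracy = {acc:.4f}")

If the plain nn.RNN trains poorly on long reviews, nn.GRU is a drop-in replacement in RNNModel; nn.LSTM also works but returns a (hidden, cell) tuple that must be unpacked in forward.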
3. Save each code block into a separate .py file (cnn_tensorflow.py and rnn_pytorch.py) and run it from a terminal:
python cnn_tensorflow.py
python rnn_pytorch.py