AI Lab 8
BSCS 6-A
Code:
Explanation:
Hidden Layers: A single dense hidden layer (128 units in the code below) learns intermediate, non-linear representations of the flattened input.
Activation Function:
ReLU: Fast and effective for hidden layers to handle non-linear relationships.
The final activation function is chosen based on the task (e.g., sigmoid for binary classification, softmax for multi-class classification); a small sketch follows this list.
Convergence Settings: The optimizer, number of epochs, batch size, and validation split (Adam, 5 epochs, batch size 32, and a 10% validation split in the code below) control how quickly and reliably training converges.
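As an illustrative sketch only (not part of the lab code), the snippet below shows how the output activation follows the task. The hidden-layer size, the 20-feature input shape, and the 5-class output are assumed purely for the example.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Hypothetical binary classifier: ReLU hidden layer, one sigmoid output unit
binary_model = Sequential([
    Dense(64, activation='relu', input_shape=(20,)),  # 20 input features, assumed for the example
    Dense(1, activation='sigmoid')                    # probability of the positive class
])
binary_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Hypothetical multi-class classifier: same hidden layer, softmax over 5 assumed classes
multiclass_model = Sequential([
    Dense(64, activation='relu', input_shape=(20,)),
    Dense(5, activation='softmax')                    # one probability per class, summing to 1
])
multiclass_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])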
Code:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.keras.utils import to_categorical
from sklearn.metrics import classification_report

# Load the MNIST digits (assumed here from the 28x28 input shape and 10 output classes)
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
X_train, X_test = X_train / 255.0, X_test / 255.0      # normalize pixel values to [0, 1]
y_train_cat, y_test_cat = to_categorical(y_train, 10), to_categorical(y_test, 10)

# Feed-forward network: flatten the image, one ReLU hidden layer, softmax output
model = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(128, activation='relu'),
    Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Train, holding out 10% of the training data for validation
model.fit(X_train, y_train_cat, epochs=5, batch_size=32, validation_split=0.1)

# Per-class precision, recall, and F1 on the test set
y_pred = model.predict(X_test)
y_pred_classes = np.argmax(y_pred, axis=1)
y_true_classes = y_test                                # integer test labels
print(classification_report(y_true_classes, y_pred_classes))
Output:
Explanation:
1. Activation Functions:
o ReLU for hidden layers to handle non-linear relationships.
o Softmax for output layer in classification tasks.
2. Optimizers:
o Adam: Adaptive optimizer with momentum.
o SGD: Simple gradient descent for experimentation.
3. Preprocessing (made concrete in the sketch after this list):
o Normalization ensures input values are in a standard range.
o One-hot encoding converts categorical labels to numerical vectors.
4. Evaluation (also shown in the sketch):
o Use metrics such as accuracy and loss to assess model performance.
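The following minimal sketch illustrates points 3 and 4. The pixel and label values are toy numbers chosen for the example, and the evaluation part assumes the trained model, X_test, and y_test_cat from the earlier code block are still in scope.

import numpy as np
from tensorflow.keras.utils import to_categorical

# Normalization (point 3): toy pixel values in [0, 255] rescaled to [0, 1]
pixels = np.array([0, 128, 255], dtype=np.float32)
print(pixels / 255.0)                        # approximately [0.0, 0.502, 1.0]

# One-hot encoding (point 3): toy integer labels become binary vectors
labels = np.array([0, 2, 1])
print(to_categorical(labels, num_classes=3))  # [[1,0,0], [0,0,1], [0,1,0]]

# Evaluation (point 4): overall loss and accuracy of the trained model
# (assumes model, X_test, and y_test_cat from the earlier code block)
test_loss, test_acc = model.evaluate(X_test, y_test_cat, verbose=0)
print(f"Test loss: {test_loss:.4f}, test accuracy: {test_acc:.4f}")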
Code:
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.keras.optimizers import SGD, Adam
from sklearn.metrics import classification_report

# Reuses X_train, y_train_cat, X_test, and y_test from the previous code block
def create_model(optimizer='adam', learning_rate=0.001):
    # Same architecture as before; only the optimizer and learning rate change
    model = Sequential([
        Flatten(input_shape=(28, 28)),
        Dense(128, activation='relu'),
        Dense(10, activation='softmax')
    ])
    if optimizer == 'sgd':
        opt = SGD(learning_rate=learning_rate)
    else:
        opt = Adam(learning_rate=learning_rate)
    model.compile(optimizer=opt,
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# Create model with SGD optimizer and train it
model_sgd = create_model('sgd', 0.01)
history_sgd = model_sgd.fit(X_train, y_train_cat, epochs=10, batch_size=64,
                            validation_split=0.1, verbose=1)

# Create model with Adam optimizer and train it
model_adam = create_model('adam', 0.001)
history_adam = model_adam.fit(X_train, y_train_cat, epochs=10, batch_size=64,
                              validation_split=0.1, verbose=1)

# Train with several learning rates and record the final validation accuracy
learning_rates = [0.1, 0.01, 0.001]
lr_results = {}
for lr in learning_rates:
    history = create_model('adam', lr).fit(X_train, y_train_cat, epochs=10,
                                           batch_size=64, validation_split=0.1, verbose=1)
    # Evaluate performance at this learning rate
    lr_results[lr] = history.history['val_accuracy'][-1]

# Compare training accuracy of the two optimizers
plt.plot(history_sgd.history['accuracy'], label='SGD')
plt.plot(history_adam.history['accuracy'], label='Adam')
plt.legend()
plt.show()
# Compare validation accuracy of the two optimizers
plt.plot(history_sgd.history['val_accuracy'], label='SGD')
plt.plot(history_adam.history['val_accuracy'], label='Adam')
plt.legend()
plt.show()

# Per-class report for the Adam-trained model on the test set
y_pred = model_adam.predict(X_test)
y_pred_classes = y_pred.argmax(axis=1)
y_true_classes = y_test
print(classification_report(y_true_classes, y_pred_classes))
Output: