24mcs1025 Ex2 Part C Wine Dataset
MCSE603P: Deep Learning Lab
EXERCISE 2 Part C: Perceptron Implementation using Keras/Tensorflow
Submitted By:
Keerthana R (24MCS1025)
M.Tech CSE
SCOPE/VIT Chennai
Submitted To:
Dr. Rajalakshmi R
Associate Professor
SCOPE/VIT Chennai
Importing the libraries needed to build and evaluate the model: TensorFlow/Keras for the neural network, Matplotlib for plotting, and Scikit-learn for loading the Wine dataset, splitting it, and preprocessing. The cells that follow load the Wine dataset, split it into training and testing sets, and normalize the features using StandardScaler for better model performance.
[2]: import numpy as np
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
Loading the Wine dataset into the variable data, with X containing the feature matrix (13 chemical attributes) and y containing the target vector (class labels representing the three wine cultivars).
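[3]: data = load_wine()
X = data.data    # feature matrix: 13 chemical attributes per sample
y = data.target  # integer class labels for the 3 wine cultivars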
Splitting the Wine dataset into training and testing sets, with 80% of the data used for training (X_train, y_train) and 20% reserved for testing (X_test, y_test). The random_state=42 ensures reproducibility of the split.
[4]: X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
Using StandardScaler to standardize the training and test feature matrices (X_train and X_test),
ensuring that each feature has a mean of 0 and a standard deviation of 1. This normalization helps
improve the performance of the machine learning model.
[5]: scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
Defining a neural network model using Keras, with one hidden layer containing 64 neurons and ReLU activation, and an output layer with 3 neurons (corresponding to the 3 classes in the Wine dataset) using softmax activation for multi-class classification.
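[6]: model = keras.Sequential([
    keras.layers.Dense(64, activation='relu', input_shape=(13,)),  # hidden layer
    keras.layers.Dense(3, activation='softmax')  # output layer: one neuron per class
])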
/usr/local/lib/python3.10/dist-packages/keras/src/layers/core/dense.py:87:
UserWarning: Do not pass an `input_shape`/`input_dim` argument to a layer. When
using Sequential models, prefer using an `Input(shape)` object as the first
layer in the model instead.
super().__init__(activity_regularizer=activity_regularizer, **kwargs)
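As the warning suggests, the same architecture can be declared warning-free by making an explicit Input object the first entry of the Sequential model; a minimal equivalent sketch:

model = keras.Sequential([
    keras.Input(shape=(13,)),  # declare the input shape explicitly
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(3, activation='softmax')
])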
Compiling the neural network model by specifying the Adam optimizer, sparse categorical cross-
entropy loss function (suitable for multi-class classification with integer labels), and accuracy as
the evaluation metric
[7]: model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy']
)
Training the neural network model on the training data (X_train, y_train) for 10 epochs, where the
model adjusts its weights based on the input data and the corresponding target labels to minimize
the loss function.
[8]: model.fit(X_train, y_train, epochs=10)
Epoch 1/10
5/5 ━━━━━━━━━━━━━━━━━━━━ 2s 12ms/step - accuracy: 0.1693 - loss: 1.4402
Epoch 2/10
5/5 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.2779 - loss: 1.2710
Epoch 3/10
5/5 ━━━━━━━━━━━━━━━━━━━━ 0s 6ms/step - accuracy: 0.4658 - loss: 1.1051
Epoch 4/10
5/5 ━━━━━━━━━━━━━━━━━━━━ 0s 6ms/step - accuracy: 0.5516 - loss: 0.9859
Epoch 5/10
5/5 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.6312 - loss: 0.8493
Epoch 6/10
5/5 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.7025 - loss: 0.7698
Epoch 7/10
5/5 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.8167 - loss: 0.6478
Epoch 8/10
5/5 ━━━━━━━━━━━━━━━━━━━━ 0s 8ms/step - accuracy: 0.8319 - loss: 0.6082
Epoch 9/10
5/5 ━━━━━━━━━━━━━━━━━━━━ 0s 6ms/step - accuracy: 0.8353 - loss: 0.5646
Epoch 10/10
5/5 ━━━━━━━━━━━━━━━━━━━━ 0s 8ms/step - accuracy: 0.8978 - loss: 0.4740
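Since Matplotlib was imported for plotting, the per-epoch metrics above could also be visualized; a minimal sketch, assuming the History object returned by model.fit is stored in a variable named history (the cell above does not keep it):

history = model.fit(X_train, y_train, epochs=10)
plt.plot(history.history['loss'], label='loss')          # per-epoch training loss
plt.plot(history.history['accuracy'], label='accuracy')  # per-epoch training accuracy
plt.xlabel('epoch')
plt.legend()
plt.show()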
Evaluating the trained model on the test data (X_test, y_test), returning the loss and accuracy
metrics to assess the model’s performance on unseen data.
[9]: model.evaluate(X_test, y_test)
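model.evaluate returns the loss followed by the compiled metrics, so the values can be captured and printed; a small sketch with illustrative variable names:

test_loss, test_acc = model.evaluate(X_test, y_test)  # returns [loss, accuracy]
print(f'Test loss: {test_loss:.4f}, test accuracy: {test_acc:.4f}')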
Generating predictions (y_pred) for the test data (X_test) using the trained model, then printing the predicted class labels by applying np.argmax() to convert the softmax output probabilities into the index of the highest probability for each sample, which corresponds to the predicted class.
[10]: y_pred = model.predict(X_test)
print(np.argmax(y_pred, axis=1))
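The predicted labels can also be compared against the ground-truth labels to compute test accuracy directly; a minimal sketch, assuming scikit-learn's accuracy_score:

from sklearn.metrics import accuracy_score
y_pred_labels = np.argmax(y_pred, axis=1)  # class index with highest probability
print(accuracy_score(y_test, y_pred_labels))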