DL Experiments 1 To 5
1. Introduction to Keras
History of Keras:
Keras was originally developed by François Chollet in March 2015 as a user-friendly API for building and training neural
networks. It aimed to simplify the development of deep learning models by providing a high-level interface that could run on top
of various backends such as TensorFlow, Theano, and Microsoft Cognitive Toolkit (CNTK). In 2017, Keras was integrated into
TensorFlow as tf.keras, making it the default high-level API for TensorFlow.
Why Keras?
Keras abstracts many of the complexities involved in building and training neural networks. It provides:
• Layers and Models: Tools to create and stack layers to build models, such as Sequential or Functional API.
• Optimizers and Loss Functions: Predefined functions to compile models, making it easy to switch between different
optimization strategies and loss metrics.
• Training and Evaluation: Functions to fit models on data, evaluate performance, and make predictions.
• Pretrained Models: Access to various pretrained models for transfer learning and fine-tuning.
Sequential API: The Sequential API is a straightforward way to build models layer by layer. It is ideal for simple, linear stacks
of layers.
Example (imports are shown once here; the later examples reuse them):
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(10, activation='softmax', input_shape=(784,))  # input_shape is an example choice
])
Functional API: The Functional API is more flexible and allows for complex architectures such as multi-input, multi-output
models, and shared layers.
Example:
inputs = Input(shape=(input_dim,))  # Input and Model come from tensorflow.keras
x = Dense(64, activation='relu')(inputs)
outputs = Dense(1, activation='sigmoid')(x)
model = Model(inputs=inputs, outputs=outputs)
Model Subclassing: This approach allows for the most flexibility, enabling users to define custom models by subclassing the
tf.keras.Model class.
Example:
class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.dense1 = Dense(64, activation='relu')    # sub-layers are defined once in __init__
        self.dense2 = Dense(10, activation='softmax')

    def call(self, inputs):                           # the forward pass goes in call()
        x = self.dense1(inputs)
        return self.dense2(x)
Key features of Keras:
• Pre-built Layers and Models: Keras includes a wide range of layers (Dense, Conv2D, LSTM, etc.) and pre-trained
models (like VGG, ResNet, etc.) for easy experimentation and fine-tuning.
• Optimizers: Various optimizers like SGD, Adam, and RMSprop are available to optimize model training.
• Loss Functions: Common loss functions (e.g., mean squared error, categorical cross-entropy) are included to measure
model performance.
• Metrics: Metrics for evaluating model performance, such as accuracy and precision, are readily available.
• Callbacks: Tools for model monitoring and training control, including checkpoints, early stopping, and learning rate
adjustments.
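As an illustrative sketch of the callback mechanism (the checkpoint filename, patience value, and fit arguments below are example choices, not prescribed by this manual):
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

callbacks = [
    EarlyStopping(monitor='val_loss', patience=5),             # stop once validation loss stops improving
    ModelCheckpoint('best_model.keras', save_best_only=True)   # keep only the best weights seen so far
]
# model.fit(X_train, y_train, epochs=100, validation_split=0.2, callbacks=callbacks)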
Where is Keras used?
• Research: Keras is widely used in research due to its ease of use, allowing researchers to quickly prototype and
experiment with neural network models.
• Industry: Many industry applications use Keras for tasks like image classification, natural language processing, and
recommendation systems due to its simplicity and integration with TensorFlow.
• Education: Keras is popular in educational settings for teaching deep learning concepts due to its intuitive API and
simplicity.
• To use Keras, you typically need to install TensorFlow, as Keras is included as part of TensorFlow.
• On cmd: pip install tensorflow
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# A minimal binary classifier: one hidden ReLU layer and a sigmoid output
model = Sequential([
    Dense(64, activation='relu', input_shape=(8,)),  # 8 input features (an example choice)
    Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
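A hedged continuation of the snippet above, training and evaluating on synthetic data (the shapes follow the input_shape=(8,) assumed earlier):
X = np.random.rand(100, 8)                  # 100 samples, 8 features (synthetic data)
y = np.random.randint(0, 2, size=(100, 1))  # random binary labels
model.fit(X, y, epochs=5, batch_size=16, verbose=0)
loss, acc = model.evaluate(X, y, verbose=0)
print(f"loss={loss:.3f} accuracy={acc:.3f}")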
How to set up Keras?
1. Download Python:
o Go to the Python website.
o Download and install the latest version of Python. Ensure that the "Add
Python to PATH" option is checked during installation.
2. Verify Python Installation:
o Open a terminal (Command Prompt, PowerShell, or Terminal on macOS).
o Type python --version to verify the installation.
3. Install Pip:
o Pip is the package installer for Python, and it should be included with Python.
Verify it by typing pip --version in the terminal.
4. Create a folder on the desktop and name it DL_Lab_<last 4 digits of your roll number>.
1. Open VS Code:
o Launch Visual Studio Code.
o Open the folder that you created on the desktop via VS Code.
o Open the integrated terminal in VS Code, then follow the instructions below.
o Right-click the folder containing your Python files, copy its relative path, then in the terminal type
cd 'relative path of python files folder' and press Enter.
(Python files folder means: the place where you save your Python code.)
o Create a virtual environment inside that folder by running:
python -m venv myenv
o Activate it (on Windows): myenv\Scripts\activate
If the terminal doesn't allow you to create or activate the environment, use the following.
This allows all local scripts to execute on the machine, irrespective of whether they're signed or not:
Open a terminal or command prompt (Run as Administrator), then paste the command.
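The command itself is not reproduced in this handout; the usual PowerShell execution-policy change matching this description (an assumption here — confirm with your lab instructions) is:
Set-ExecutionPolicy RemoteSigned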
Package 1 - TensorFlow
What is TensorFlow?
TensorFlow is an open-source machine learning framework developed by Google that facilitates building, training, and
deploying machine learning and deep learning models.
It offers flexible and comprehensive tools for deep learning research and production, allowing for easy scalability and
deployment on various platforms and devices.
TensorFlow is used in diverse applications including natural language processing, computer vision, speech recognition, and
robotics across industries such as healthcare, finance, and technology.
TensorFlow uses computational graphs to represent and optimize mathematical operations, enabling efficient training and
inference of models on CPUs, GPUs, and TPUs.
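A minimal sketch of this idea using tf.GradientTape, TensorFlow's mechanism for recording operations so they can be differentiated (the values here are arbitrary):
import tensorflow as tf

x = tf.constant([[1.0, 2.0]])      # a 1x2 input
w = tf.Variable([[0.5], [0.5]])    # a 2x1 weight matrix
with tf.GradientTape() as tape:    # records the computation
    y = tf.matmul(x, w)            # y = x . w
grad = tape.gradient(y, w)         # automatic differentiation: dy/dw is x transposed
print(y.numpy(), grad.numpy())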
TensorFlow was created by Google Brain, a research team within Google focused on machine learning and artificial
intelligence, and was first released in November 2015.
TensorFlow Sub-Packages:
• TensorFlow Core: The foundational library providing low-level APIs for building and training machine learning models.
• TensorFlow Keras: A high-level API for building and training deep learning models, integrated into TensorFlow as tf.keras.
• TensorFlow Hub: A library for reusable machine learning modules, facilitating easy sharing and usage of pre-trained
models.
• TensorFlow Extended (TFX): A production-ready platform for managing and deploying machine learning pipelines in
production environments.
• TensorFlow Lite: A lightweight solution for deploying machine learning models on mobile and edge devices.
• TensorFlow.js: A library for running machine learning models directly in the browser or on Node.js.
• TensorFlow Probability: A library for probabilistic reasoning and statistical analysis, extending TensorFlow with
probabilistic models and methods.
• TensorFlow Federated: A framework for federated learning, allowing for decentralized machine learning model training
across multiple devices.
• TensorFlow Graphics: Provides tools for incorporating computer graphics and geometric transformations into TensorFlow
models.
Package 2 - NumPy
Why?
NumPy (Numerical Python) is essential for efficient numerical computation in Python. It provides support for large, multi-
dimensional arrays and matrices, along with a collection of mathematical functions to operate on these arrays.
Where?
NumPy is used in scientific computing, data analysis, machine learning, and any domain requiring fast numerical computations.
It's a foundational library for libraries like Pandas, SciPy, and scikit-learn.
How?
NumPy provides powerful array objects (ndarray), and functions for mathematical operations, linear algebra, random number
generation, and more. It integrates seamlessly with other scientific libraries in Python.
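A minimal sketch of the ndarray and broadcasting in action:
import numpy as np

a = np.arange(6).reshape(2, 3)  # a 2x3 ndarray: [[0 1 2], [3 4 5]]
b = np.array([10, 20, 30])      # a 1-D array
print(a + b)                    # broadcasting: b is added to every row of a
print(a.mean(axis=0))           # column-wise mean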
When?
Use NumPy when you need to perform high-performance numerical operations, manipulate large datasets, or require support
for mathematical functions and algorithms.
Who?
NumPy was created by Travis Oliphant and is now maintained by a large community of contributors and the NumPy
developer team.
Sub-packages
1. numpy.core: Contains the core functionalities of NumPy including the ndarray object, array operations, and broadcasting.
2. numpy.lib: Provides utility functions and tools, such as mathematical functions, array manipulation routines, and
input/output support.
3. numpy.fft: Implements fast Fourier transform algorithms and tools for signal processing in the frequency domain.
4. numpy.linalg: Offers linear algebra routines including matrix decomposition, eigenvalue computations, and matrix
operations.
5. numpy.random: Supplies functions for generating random numbers, including distributions and random sampling.
6. numpy.polynomial: Contains functions for polynomial operations, including polynomial fitting, roots, and evaluations.
7. numpy.ma: Provides a masked array class for handling arrays with missing or invalid entries.
8. numpy.testing: Includes tools for writing and running tests to verify the correctness of NumPy functions and user code.
9. numpy.distutils: A collection of utilities for building and distributing NumPy extensions and related software.
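A few of these sub-packages in action (a minimal sketch):
import numpy as np

signal = np.sin(np.linspace(0, 8 * np.pi, 64))
spectrum = np.fft.fft(signal)                 # numpy.fft: frequency-domain transform
eigvals = np.linalg.eigvals(2.0 * np.eye(3))  # numpy.linalg: eigenvalues of a matrix
sample = np.random.normal(0, 1, size=5)       # numpy.random: Gaussian sampling
print(spectrum.shape, eigvals, sample)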
Package 3 - Matplotlib
What:
Matplotlib is a comprehensive library for creating static, animated, and interactive visualizations in Python. It provides a variety
of plotting functions for generating charts, graphs, and figures.
Why:
Matplotlib is widely used for data visualization because it allows users to create high-quality plots and graphs that can be
customized in detail. It is particularly useful for visualizing data distributions, trends, and patterns, which is essential for data
analysis, scientific research, and reporting.
How:
To use Matplotlib, you typically start by importing the library and then create plots using functions from the pyplot module,
which provides a MATLAB-like interface for plotting. You can generate a wide range of plots such as line graphs, bar charts,
scatter plots, histograms, and more. Customization options are extensive, allowing you to modify plot styles, colors, labels, and
other graphical elements.
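A minimal pyplot sketch (the function plotted is an arbitrary example):
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 100)
plt.plot(x, np.sin(x), label='sin(x)')  # a simple line plot
plt.xlabel('x')
plt.ylabel('y')
plt.title('A simple Matplotlib plot')
plt.legend()
plt.show()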
Where:
Matplotlib is available through the Python Package Index (PyPI) and can be installed using package managers like pip.
It is commonly used in data science, machine learning, and scientific computing projects to create visualizations for analysis and
presentation. Matplotlib integrates well with other libraries like NumPy and pandas, and can generate plots for use in Jupyter
notebooks, web applications, and standalone graphical interfaces.
Experiment 3
3. Train the model to add two numbers and report the result.
Aim:
To create and train a simple neural network using Keras to predict the sum of two input numbers. Here
the neural network is designed to learn this relationship by being trained on randomly generated data.
Description:
Defining and training a simple neural network using Keras to approximate the sum of two random input
numbers. We begin by generating random training and test data, where each sample consists of two random numbers
and their sum as the target. The neural network model is built with a sequential structure, featuring an input layer that
accepts two features, a hidden layer with 10 neurons using the ReLU (Rectified Linear Unit) activation function, and an
output layer with a single neuron that predicts the sum. The model is compiled with the Adam optimizer and a mean
squared error loss function. It is then trained on the generated data for 100 epochs with a batch size of 10. After
training, the model's performance is evaluated on test data, and the test loss is printed. Finally, the model predicts the
sum for five new random samples, comparing the predicted sums with the actual sums, and printing the results.
Algorithm:
• Generate random training data: pairs of random numbers and their sums.
• Build a Sequential model with a 10-neuron ReLU hidden layer and a single linear output neuron.
• Compile with the Adam optimizer and mean squared error loss; train for 100 epochs with batch size 10.
• Evaluate the test loss, then predict the sums of five new random pairs and compare them with the actual sums.
Program:
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Generate pairs of random numbers and their sums
def generate_data(num_samples):
    X = np.random.rand(num_samples, 2)
    y = np.sum(X, axis=1)
    return X, y

# Prepare data
num_samples = 1000
X_train, y_train = generate_data(num_samples)

# Build: one hidden ReLU layer, one linear output neuron
model = Sequential()
model.add(Dense(10, activation='relu', input_shape=(2,)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mean_squared_error')

# Train
model.fit(X_train, y_train, epochs=100, batch_size=10, verbose=0)

# Evaluate on held-out data
X_test, y_test = generate_data(200)
print("Test loss:", model.evaluate(X_test, y_test, verbose=0))

# Make predictions on five new samples
X_new, y_new = generate_data(5)
y_pred = model.predict(X_new)
for i in range(len(X_new)):
    print(f"{X_new[i]} -> predicted {y_pred[i][0]:.4f}, actual {y_new[i]:.4f}")
4. Train the model to multiply two matrices and report the result using Keras.
Aim:
To develop and train a neural network using Keras to predict the product of two 3x3 matrices. The model will
be trained on a dataset of randomly generated matrices and their corresponding products, with the goal of learning
the matrix multiplication operation.
Description:
In this experiment, a neural network will be trained to approximate matrix multiplication results using Keras.
We begin by generating a dataset of 100,000 samples, each consisting of two 3x3 matrices and their product. These
matrices will be normalized, and the data will be reshaped to fit the model’s input requirements. The model will be
constructed using a Sequential architecture with three Dense layers, including ReLU activations and a final linear
layer to predict the matrix product. After training the model for 100 epochs, it will be evaluated for loss and mean
absolute error. To test the model, new matrices will be provided, and the predicted product will be compared to the
actual product obtained via traditional matrix multiplication, demonstrating the model's capability to generalize and
predict matrix products.
Algorithm:
Generate Data:
• Create Random Matrices: Generate two sets of random matrices, X1 and X2, each with
dimensions (num_samples, matrix_size, matrix_size).
• Compute Matrix Products: For each pair of matrices, compute their matrix product to form the
target output y. The result y has dimensions (num_samples, matrix_size, matrix_size).
Normalize Data:
• Normalize Input Matrices: Scale the elements of matrices X1 and X2 by dividing by 10.0 to
bring values into the range [0, 1].
• Normalize Output: Scale the matrix product y by dividing by 100.0, assuming the maximum
possible value is 100.
Reshape Data:
• Reshape Input Matrices: Flatten each matrix in X1 and X2 from shape (matrix_size, matrix_size) to
shape (matrix_size*matrix_size,).
• Concatenate Inputs: Combine the flattened matrices from X1 and X2 into a single array X with
shape (num_samples, matrix_size*matrix_size*2).
• Reshape Output: Flatten each matrix in y to shape (matrix_size*matrix_size,).
Define Model Architecture: Create a Sequential model with the following layers:
• Input Layer: Accepts input with shape (matrix_size*matrix_size*2,).
• Hidden Layers: Two dense layers with 128 units each and ReLU activation functions.
• Output Layer: A dense layer with matrix_size*matrix_size units and a linear activation function to
output the flattened matrix product.
Compile the Model:
• Choose Optimizer and Loss Function: Use the Adam optimizer and mean squared error
(MSE) as the loss function. Include mean absolute error (MAE) as an additional metric.
Train the Model:
• Fit the Model: Train the model using the concatenated input data X and the flattened output y,
for 100 epochs with a batch size of 32. Use 20% of the data for validation.
Evaluate the Model:
• Assess Performance: Evaluate the model on the training data to get the loss and MAE metrics.
Predict on New Matrices:
• Normalize Input Matrices: Scale the input matrices A and B by dividing by 10.0.
• Concatenate and Flatten: Flatten the normalized matrices and concatenate them.
• Predict and De-normalize: Use the trained model to predict the matrix product, and then scale
the output by multiplying by 100.0 to revert to the original range.
Output Results:
• Print Matrices and Results: Display the input matrices A and B, the predicted matrix product,
and the actual matrix product computed using NumPy's dot function.
Program:
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Step 1: Generate random 3x3 matrix pairs and their products
num_samples = 100000
matrix_size = 3
X1 = np.random.rand(num_samples, matrix_size, matrix_size) * 10.0
X2 = np.random.rand(num_samples, matrix_size, matrix_size) * 10.0
y = np.matmul(X1, X2)

# Step 2: Normalize inputs to [0, 1]; scale outputs by the assumed maximum of 100
X1 = X1 / 10.0
X2 = X2 / 10.0
y = y / 100.0

# Step 3: Flatten each matrix and concatenate the two inputs
X1 = X1.reshape(num_samples, matrix_size*matrix_size)
X2 = X2.reshape(num_samples, matrix_size*matrix_size)
X = np.concatenate([X1, X2], axis=1)
y = y.reshape(num_samples, matrix_size*matrix_size)

# Step 4: Build, compile, and train the model
model = Sequential([
    Dense(128, activation='relu', input_shape=(matrix_size*matrix_size*2,)),
    Dense(128, activation='relu'),
    Dense(matrix_size*matrix_size, activation='linear')
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
model.fit(X, y, epochs=100, batch_size=32, validation_split=0.2)

# Step 5: Predict the product of two new matrices and de-normalize
def multiply_matrices(model, A, B):
    A_norm = A / 10.0
    B_norm = B / 10.0
    input_data = np.concatenate([A_norm.flatten(), B_norm.flatten()]).reshape(1, -1)
    prediction = model.predict(input_data) * 100.0
    return prediction.reshape(matrix_size, matrix_size)

A = np.random.rand(matrix_size, matrix_size) * 10.0
B = np.random.rand(matrix_size, matrix_size) * 10.0
predicted_product = multiply_matrices(model, A, B)
print(f"Matrix A:\n{A}")
print(f"Matrix B:\n{B}")
print(f"Predicted Product:\n{predicted_product}")
print(f"Actual Product:\n{np.dot(A, B)}")
Result:
The neural network successfully learned to predict the product of two 3x3 matrices. The model's performance was
evaluated using Mean Squared Error (MSE) and Mean Absolute Error (MAE), providing a measure of prediction accuracy.
The trained model demonstrated its capability to approximate the matrix multiplication operation effectively, with predictions
closely matching actual matrix products.
Experiment 5
5. Train the model to classify numbers as prime or non-prime based on their binary representations and report the result.
Aim:
To build and train a neural network model that can classify numbers as prime or non-prime based on their
binary representations. The model is designed with multiple dense layers, PReLU activation, and dropout for
regularization, and it is trained using binary cross-entropy loss to optimize its accuracy. The objective is to evaluate the
model's ability to correctly identify prime numbers, with a focus on precision, recall, and F-score metrics, and to analyse
the model's performance through the training history and predictions.
Description:
A neural network is designed and trained to predict whether a number is prime based on its binary
representation. The model utilizes a feedforward architecture with several Dense layers, PReLU activation functions,
and Dropout layers to prevent overfitting. The dataset was generated by encoding numbers from 2 up to 16384 into
binary and labelling them as prime or not. After training the model for 100 epochs, its performance is evaluated on
numbers from 2 to 100. The results show the model's accuracy in classifying prime numbers, with metrics including
precision, recall, and F1 score calculated to assess its effectiveness. The training history is visualized to track the model’s
loss over epochs.
Algorithm:
1. Initialization: Set the random seed for reproducibility and import the required libraries.
2. Define Parameters: Set the binary width num_digits and the upper bound max_number = 2 ** num_digits.
3. Generate Primes: Build the list of primes below max_number by trial division.
4. Encode Numbers: Convert each number from 2 up to max_number into a fixed-width binary vector
(bin_encode) and label it 1 if prime, 0 otherwise (prime_encode).
5. Build the Model: Stack Dense layers of 100, 50, and 25 units, each followed by a PReLU activation and
Dropout, ending in a single sigmoid output unit.
6. Compile the Model:
• Compile the model with the RMSprop optimizer, binary_crossentropy loss function, and
accuracy as the evaluation metric.
7. Train the Model: Fit for 100 epochs with a 10% validation split.
8. Evaluate the Model:
• Initialize counters for errors, correct predictions, true positives (tp), false negatives
(fn), and false positives (fp).
• Iterate over numbers from 2 to 100:
1. Convert the number to its binary form using bin_encode(i).
2. Use the trained model to predict whether the number is prime.
3. Compare the prediction with the actual prime status using prime_encode(i).
4. Update counters based on the prediction outcome (correct, tp, fn, fp).
• Calculate precision, recall, and F-score using the updated counters.
• Print the number of errors, correct predictions, and the F-score.
9. Visualize Training:
• Define a function plot_history(history) to plot the training and validation loss over
epochs.
• Use matplotlib to generate and display the plot, showing the model's loss during
training.
10. Run: Execute the code to build, train, and evaluate the model, and visualize the loss curve
during training.
Program:
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, PReLU, Dropout, Activation

# Reproducibility
seed = 7
np.random.seed(seed)

# Parameters
num_digits = 14               # binary width; covers numbers up to 2**14 = 16384
max_number = 2 ** num_digits

# Build the list of primes below max_number by trial division
def prime_list():
    primes = [2, 3]
    for n in range(5, max_number, 2):
        is_prime = True
        for p in primes:
            if p * p > n:
                break
            if n % p == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(n)
    return primes

primes = set(prime_list())    # a set makes membership tests fast

# Label: 1 if i is prime, else 0
def prime_encode(i):
    return 1 if i in primes else 0

# Encode i as a fixed-width vector of binary digits
def bin_encode(i):
    return [int(b) for b in format(i, '0{}b'.format(num_digits))]

# Create dataset: every number from 2 up to max_number
def create_dataset():
    x, y = [], []
    for i in range(2, max_number):
        x.append(bin_encode(i))
        y.append(prime_encode(i))
    return np.array(x), np.array(y)

x_train, y_train = create_dataset()

# Feedforward network with PReLU activations and dropout for regularization
model = Sequential()
model.add(Dense(units=100, input_dim=num_digits))
model.add(PReLU())
model.add(Dropout(rate=0.2))
model.add(Dense(units=50))
model.add(PReLU())
model.add(Dropout(rate=0.2))
model.add(Dense(units=25))
model.add(PReLU())
model.add(Dropout(rate=0.2))
model.add(Dense(units=1))
model.add(Activation("sigmoid"))

model.compile(optimizer='RMSprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# batch_size=128 matches the 116 steps per epoch in the sample output below
history = model.fit(x_train, y_train, epochs=100, batch_size=128,
                    validation_split=0.1, verbose=1)

# Evaluate on numbers 2 to 100: print number, actual label, predicted label, probability
errors, correct = 0, 0
tp, fn, fp = 0, 0, 0
for i in range(2, 101):
    x = np.array(bin_encode(i)).reshape(-1, num_digits)
    y_pred = model.predict(x)[0][0]
    pred = 1 if y_pred >= 0.5 else 0
    obs = prime_encode(i)
    if pred == obs:
        correct += 1
        if obs == 1:
            tp += 1
    else:
        errors += 1
        if obs == 1:
            fn += 1
        else:
            fp += 1
    print(i, obs, pred, y_pred)

precision = tp / (tp + fp) if (tp + fp) else 0.0
recall = tp / (tp + fn) if (tp + fn) else 0.0
fscore = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
print("Errors:", errors, "Correct:", correct, "F-score:", fscore)

# Plot training and validation loss over epochs
def plot_history(history):
    plt.plot(history.history['loss'])
    plt.plot(history.history['val_loss'])
    plt.title('Model Loss')
    plt.xlabel('Epoch')
    plt.ylabel('Loss')
    plt.legend(['train', 'validation'])
    plt.savefig('model_loss.png')
    plt.show()

plot_history(history)
Output: (Write only the highlighted output in the observation and draw the graph. In the per-number log below, the columns are: number, actual label, predicted label, predicted probability.)
Epoch 1/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - accuracy: 0.8539 - loss: 0.4054 - val_accuracy: 0.8938 - val_loss: 0.2933
Epoch 2/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8823 - loss: 0.2988 - val_accuracy: 0.8938 - val_loss: 0.2596
Epoch 3/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8818 - loss: 0.2816 - val_accuracy: 0.8938 - val_loss: 0.2595
Epoch 4/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8828 - loss: 0.2784 - val_accuracy: 0.8938 - val_loss: 0.2591
Epoch 5/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8783 - loss: 0.2810 - val_accuracy: 0.8938 - val_loss: 0.2593
Epoch 6/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8853 - loss: 0.2723 - val_accuracy: 0.8938 - val_loss: 0.2600
Epoch 7/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8849 - loss: 0.2712 - val_accuracy: 0.8938 - val_loss: 0.2587
Epoch 8/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8838 - loss: 0.2716 - val_accuracy: 0.8938 - val_loss: 0.2589
Epoch 9/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8809 - loss: 0.2782 - val_accuracy: 0.8938 - val_loss: 0.2585
Epoch 10/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8801 - loss: 0.2772 - val_accuracy: 0.8938 - val_loss: 0.2594
Epoch 11/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8837 - loss: 0.2737 - val_accuracy: 0.8938 - val_loss: 0.2591
Epoch 12/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8811 - loss: 0.2739 - val_accuracy: 0.8938 - val_loss: 0.2595
Epoch 13/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8836 - loss: 0.2741 - val_accuracy: 0.8938 - val_loss: 0.2607
Epoch 14/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8847 - loss: 0.2719 - val_accuracy: 0.8938 - val_loss: 0.2603
Epoch 15/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8784 - loss: 0.2818 - val_accuracy: 0.8938 - val_loss: 0.2606
Epoch 16/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.8860 - loss: 0.2682 - val_accuracy: 0.8938 - val_loss: 0.2591
Epoch 17/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8847 - loss: 0.2705 - val_accuracy: 0.8938 - val_loss: 0.2596
Epoch 18/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8847 - loss: 0.2718 - val_accuracy: 0.8938 - val_loss: 0.2589
Epoch 19/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8834 - loss: 0.2714 - val_accuracy: 0.8938 - val_loss: 0.2589
Epoch 20/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8818 - loss: 0.2730 - val_accuracy: 0.8938 - val_loss: 0.2603
Epoch 21/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8817 - loss: 0.2728 - val_accuracy: 0.8938 - val_loss: 0.2589
Epoch 22/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8839 - loss: 0.2712 - val_accuracy: 0.8938 - val_loss: 0.2596
Epoch 23/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8851 - loss: 0.2689 - val_accuracy: 0.8938 - val_loss: 0.2618
Epoch 24/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8863 - loss: 0.2671 - val_accuracy: 0.8938 - val_loss: 0.2594
Epoch 25/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8849 - loss: 0.2712 - val_accuracy: 0.8938 - val_loss: 0.2597
Epoch 26/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8818 - loss: 0.2743 - val_accuracy: 0.8938 - val_loss: 0.2588
Epoch 27/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8833 - loss: 0.2721 - val_accuracy: 0.8938 - val_loss: 0.2589
Epoch 28/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8841 - loss: 0.2699 - val_accuracy: 0.8938 - val_loss: 0.2742
Epoch 29/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8848 - loss: 0.2700 - val_accuracy: 0.8938 - val_loss: 0.2604
Epoch 30/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8828 - loss: 0.2740 - val_accuracy: 0.8938 - val_loss: 0.2635
Epoch 31/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8810 - loss: 0.2746 - val_accuracy: 0.8938 - val_loss: 0.2591
Epoch 32/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8857 - loss: 0.2680 - val_accuracy: 0.8938 - val_loss: 0.2606
Epoch 33/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8836 - loss: 0.2737 - val_accuracy: 0.8938 - val_loss: 0.2609
Epoch 34/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8785 - loss: 0.2768 - val_accuracy: 0.8938 - val_loss: 0.2591
Epoch 35/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8812 - loss: 0.2754 - val_accuracy: 0.8938 - val_loss: 0.2624
Epoch 36/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8854 - loss: 0.2698 - val_accuracy: 0.8938 - val_loss: 0.2608
Epoch 37/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8791 - loss: 0.2772 - val_accuracy: 0.8938 - val_loss: 0.2627
Epoch 38/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.8856 - loss: 0.2670 - val_accuracy: 0.8938 - val_loss: 0.2587
Epoch 39/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8858 - loss: 0.2660 - val_accuracy: 0.8938 - val_loss: 0.2624
Epoch 40/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8799 - loss: 0.2761 - val_accuracy: 0.8938 - val_loss: 0.2599
Epoch 41/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8835 - loss: 0.2698 - val_accuracy: 0.8938 - val_loss: 0.2704
Epoch 42/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8866 - loss: 0.2657 - val_accuracy: 0.8938 - val_loss: 0.2599
Epoch 43/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8864 - loss: 0.2649 - val_accuracy: 0.8938 - val_loss: 0.2598
Epoch 44/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8824 - loss: 0.2713 - val_accuracy: 0.8938 - val_loss: 0.2597
Epoch 45/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8852 - loss: 0.2680 - val_accuracy: 0.8938 - val_loss: 0.2613
Epoch 46/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8789 - loss: 0.2797 - val_accuracy: 0.8938 - val_loss: 0.2714
Epoch 47/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8861 - loss: 0.2683 - val_accuracy: 0.8938 - val_loss: 0.2616
Epoch 48/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8826 - loss: 0.2729 - val_accuracy: 0.8938 - val_loss: 0.2615
Epoch 49/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8833 - loss: 0.2717 - val_accuracy: 0.8938 - val_loss: 0.2603
Epoch 50/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8819 - loss: 0.2713 - val_accuracy: 0.8938 - val_loss: 0.2603
Epoch 51/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8832 - loss: 0.2702 - val_accuracy: 0.8938 - val_loss: 0.2629
Epoch 52/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8848 - loss: 0.2675 - val_accuracy: 0.8938 - val_loss: 0.2603
Epoch 53/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8891 - loss: 0.2617 - val_accuracy: 0.8938 - val_loss: 0.2600
Epoch 54/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.8836 - loss: 0.2695 - val_accuracy: 0.8938 - val_loss: 0.2596
Epoch 55/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8788 - loss: 0.2747 - val_accuracy: 0.8938 - val_loss: 0.2606
Epoch 56/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8842 - loss: 0.2691 - val_accuracy: 0.8938 - val_loss: 0.3085
Epoch 57/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8811 - loss: 0.2725 - val_accuracy: 0.8938 - val_loss: 0.2633
Epoch 58/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8841 - loss: 0.2683 - val_accuracy: 0.8938 - val_loss: 0.2607
Epoch 59/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8826 - loss: 0.2710 - val_accuracy: 0.8938 - val_loss: 0.2663
Epoch 60/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8830 - loss: 0.2700 - val_accuracy: 0.8938 - val_loss: 0.2594
Epoch 61/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8811 - loss: 0.2739 - val_accuracy: 0.8938 - val_loss: 0.2636
Epoch 62/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8822 - loss: 0.2718 - val_accuracy: 0.8938 - val_loss: 0.2606
Epoch 63/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8843 - loss: 0.2667 - val_accuracy: 0.8938 - val_loss: 0.2636
Epoch 64/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8798 - loss: 0.2740 - val_accuracy: 0.8938 - val_loss: 0.2598
Epoch 65/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8810 - loss: 0.2721 - val_accuracy: 0.8938 - val_loss: 0.2591
Epoch 66/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8848 - loss: 0.2676 - val_accuracy: 0.8938 - val_loss: 0.2611
Epoch 67/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8832 - loss: 0.2708 - val_accuracy: 0.8938 - val_loss: 0.2606
Epoch 68/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8777 - loss: 0.2771 - val_accuracy: 0.8938 - val_loss: 0.2841
Epoch 69/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8895 - loss: 0.2631 - val_accuracy: 0.8938 - val_loss: 0.2620
Epoch 70/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8832 - loss: 0.2681 - val_accuracy: 0.8938 - val_loss: 0.2648
Epoch 71/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8880 - loss: 0.2650 - val_accuracy: 0.8938 - val_loss: 0.2656
Epoch 72/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8818 - loss: 0.2705 - val_accuracy: 0.8938 - val_loss: 0.2633
Epoch 73/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8848 - loss: 0.2637 - val_accuracy: 0.8938 - val_loss: 0.2615
Epoch 74/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8823 - loss: 0.2695 - val_accuracy: 0.8938 - val_loss: 0.2624
Epoch 75/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8793 - loss: 0.2755 - val_accuracy: 0.8938 - val_loss: 0.2612
Epoch 76/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.8832 - loss: 0.2704 - val_accuracy: 0.8938 - val_loss: 0.2615
Epoch 77/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8864 - loss: 0.2644 - val_accuracy: 0.8938 - val_loss: 0.2759
Epoch 78/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.8835 - loss: 0.2730 - val_accuracy: 0.8938 - val_loss: 0.2674
Epoch 79/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8859 - loss: 0.2648 - val_accuracy: 0.8938 - val_loss: 0.2638
Epoch 80/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8801 - loss: 0.2738 - val_accuracy: 0.8938 - val_loss: 0.2617
Epoch 81/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8798 - loss: 0.2731 - val_accuracy: 0.8938 - val_loss: 0.2614
Epoch 82/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8823 - loss: 0.2705 - val_accuracy: 0.8938 - val_loss: 0.2629
Epoch 83/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8811 - loss: 0.2705 - val_accuracy: 0.8938 - val_loss: 0.2611
Epoch 84/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8864 - loss: 0.2651 - val_accuracy: 0.8938 - val_loss: 0.2614
Epoch 85/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8781 - loss: 0.2755 - val_accuracy: 0.8938 - val_loss: 0.2650
Epoch 86/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8777 - loss: 0.2747 - val_accuracy: 0.8938 - val_loss: 0.2618
Epoch 87/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8826 - loss: 0.2673 - val_accuracy: 0.8938 - val_loss: 0.2628
Epoch 88/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8808 - loss: 0.2729 - val_accuracy: 0.8938 - val_loss: 0.2734
Epoch 89/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8815 - loss: 0.2697 - val_accuracy: 0.8938 - val_loss: 0.3050
Epoch 90/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8853 - loss: 0.2674 - val_accuracy: 0.8938 - val_loss: 0.2994
Epoch 91/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8872 - loss: 0.2640 - val_accuracy: 0.8938 - val_loss: 0.2688
Epoch 92/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.8863 - loss: 0.2645 - val_accuracy: 0.8938 - val_loss: 0.2820
Epoch 93/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.8849 - loss: 0.2703 - val_accuracy: 0.8938 - val_loss: 0.2639
Epoch 94/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8859 - loss: 0.2644 - val_accuracy: 0.8938 - val_loss: 0.2621
Epoch 95/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8844 - loss: 0.2678 - val_accuracy: 0.8938 - val_loss: 0.2636
Epoch 96/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8767 - loss: 0.2772 - val_accuracy: 0.8938 - val_loss: 0.2747
Epoch 97/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.8861 - loss: 0.2667 - val_accuracy: 0.8938 - val_loss: 0.2637
Epoch 98/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8828 - loss: 0.2701 - val_accuracy: 0.8938 - val_loss: 0.2659
Epoch 99/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8829 - loss: 0.2682 - val_accuracy: 0.8938 - val_loss: 0.2672
Epoch 100/100
116/116 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8803 - loss: 0.2706 - val_accuracy: 0.8938 - val_loss: 0.2629
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 73ms/step
2 1 0 0.383954
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 57ms/step
3 1 0 0.38080835
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 27ms/step
4 0 0 0.026962234
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step
5 1 0 0.37707672
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step
6 0 0 0.011072025
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 24ms/step
7 1 0 0.3830549
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 22ms/step
8 0 0 0.0120331
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step
9 0 0 0.3547673
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 65ms/step
10 0 0 0.0058986926
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 32ms/step
11 1 0 0.37851676
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 17ms/step
12 0 0 0.0007476649
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step
13 1 0 0.3722207
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 20ms/step
14 0 0 0.0001849889
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 17ms/step
15 0 0 0.37702525
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 26ms/step
16 0 0 0.053087883
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 33ms/step
17 1 0 0.3585548
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 17ms/step
18 0 0 0.011597598
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 17ms/step
19 1 0 0.36810765
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 72ms/step
20 0 0 0.0060820696
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 17ms/step
21 0 0 0.3612024
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 17ms/step
22 0 0 0.0009023409
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 66ms/step
23 1 0 0.36651924
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 24ms/step
24 0 0 0.00094541954
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 15ms/step
25 0 0 0.3545844
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step
26 0 0 0.00022615517
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 32ms/step
27 0 0 0.36603397
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 24ms/step
28 0 0 7.8036515e-05
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step
29 1 0 0.36277783
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 69ms/step
30 0 0 1.0681912e-05
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 23ms/step
31 1 0 0.36980352
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 25ms/step
32 0 0 0.02106894
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 26ms/step
33 0 0 0.3540372
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 40ms/step
34 0 0 0.02433002
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 25ms/step
35 0 0 0.3681119
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step
36 0 0 0.0008112356
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 56ms/step
37 1 0 0.36699072
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step
38 0 0 0.0009043261
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 22ms/step
39 0 0 0.37308374
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 17ms/step
40 0 0 0.00042471796
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 18ms/step
41 1 0 0.35018626
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step
42 0 0 0.00026290998
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step
43 1 0 0.3605038
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 44ms/step
44 0 0 3.3106484e-05
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step
45 0 0 0.35945165
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step
46 0 0 1.0414666e-05
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 26ms/step
47 1 0 0.36650753
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 22ms/step
48 0 0 0.004686651
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step
49 0 0 0.3368874
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 24ms/step
50 0 0 0.0030867003
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 50ms/step
51 0 0 0.3509847
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 24ms/step
52 0 0 0.0010047829
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 25ms/step
53 1 0 0.35520494
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 81ms/step
54 0 0 0.0003207096
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 17ms/step
55 0 0 0.36132163
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 26ms/step
56 0 0 0.0001574932
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 21ms/step
57 0 0 0.33596253
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 25ms/step
58 0 0 3.6496283e-05
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step
59 1 0 0.35572907
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step
60 0 0 2.1125838e-05
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 56ms/step
61 1 0 0.3538455
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 24ms/step
62 0 0 3.7109357e-06
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step
63 0 0 0.35979313
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 73ms/step
64 0 0 0.015715573
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 24ms/step
65 0 0 0.354191
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 24ms/step
66 0 0 0.0075669964
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 93ms/step
67 1 0 0.35365075
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 20ms/step
68 0 0 0.001518746
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step
69 0 0 0.35232738
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 17ms/step
70 0 0 0.0003496702
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 26ms/step
71 1 0 0.36261925
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step
72 0 0 0.000907353
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 29ms/step
73 1 0 0.35280442
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 88ms/step
74 0 0 0.00045616785
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 32ms/step
75 0 0 0.3603841
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step
76 0 0 0.0001191719
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 17ms/step
77 0 0 0.36085296
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 17ms/step
78 0 0 1.5682637e-05
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 20ms/step
79 1 0 0.36770338
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step
80 0 0 0.005096048
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 65ms/step
81 0 0 0.32275093
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 24ms/step
82 0 0 0.0010224471
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 17ms/step
83 1 0 0.34251547
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 80ms/step
84 0 0 0.00068605406
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 24ms/step
85 0 0 0.3481401
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 14ms/step
86 0 0 8.467569e-05
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step
87 0 0 0.35359338
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 48ms/step
88 0 0 0.0001489397
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 24ms/step
89 1 0 0.33779338
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 25ms/step
90 0 0 2.1645501e-05
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 25ms/step
91 0 0 0.34851038
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 25ms/step
92 0 0 3.6603742e-05
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 27ms/step
93 0 0 0.34789127
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 26ms/step
94 0 0 2.835524e-06
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 50ms/step
95 0 0 0.3567752
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 26ms/step
96 0 0 0.00074291835
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step
97 1 0 0.31832483
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 82ms/step
98 0 0 0.0008699383
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 26ms/step
99 0 0 0.34367406
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 17ms/step
100 0 0 8.1156795e-05
Errors: 25 Correct: 74
Result:
The network reached roughly 88% training accuracy and 89% validation accuracy, but the per-number log reveals an
important limitation: with the 0.5 decision threshold, every number from 2 to 100 was classified as non-prime (the predicted
probabilities for primes plateau near 0.35), so the 25 errors are exactly the 25 primes in that range (Errors: 25, Correct: 74),
and recall and F-score are therefore zero. The metrics and loss curve indicate the model learned the majority (non-prime)
class rather than a reliable primality rule, which is the key observation to record alongside the graph.