NNDL Record
DEPARTMENT OF
Name ...................................................
Reg.No ...................................................
Year/Semester ...................................................
Ex.No: 1 Simple Vector Addition using Tensors
Date:
Aim:-
To write a python program to demonstrate simple vector addition using tensors in TensorFlow.
Algorithm:
Program:
# importing packages
import tensorflow as tf
# creating a scalar
scalar = tf.constant(7)
scalar
scalar.ndim
# create a vector
vector = tf.constant([10, 10])
# creating a matrix
matrix = tf.constant([[1, 2], [3, 4]])
print(matrix)
print('the number of dimensions of a matrix is: ' + str(matrix.ndim))
# creating two tensors
matrix = tf.constant([[1, 2], [3, 4]])
matrix1 = tf.constant([[2, 4], [6, 8]])
# adding the two tensors element-wise (simple vector addition)
print(matrix + matrix1)
# multiplying the two tensors element-wise
print(matrix * matrix1)
# transpose of the matrix
print(tf.transpose(matrix))
Output:
tf.Tensor(
[[ 2  8]
 [18 32]], shape=(2, 2), dtype=int32)
Result:-
A python program to demonstrate Simple Vector Addition has been implemented and
executed successfully.
Ex.No: 2
Implement a regression model in Keras.
Date:
Aim:-
To write a python program to implement a regression model in Keras.
Algorithm:
Program:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
import pandas as pd
model = keras.Sequential([
    keras.layers.Input(shape=(1,)),
    keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mean_squared_error')
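# --- The data preparation and training steps are missing from the original
# --- listing; the lines below are a minimal assumed bridge using synthetic
# --- one-feature data in place of the (unshown) original dataset.
X = np.arange(0, 100, dtype=np.float32).reshape(-1, 1)
y = 3.5 * X.ravel() + 10.0
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# train for 200 epochs, matching the log below
model.fit(X_train, y_train, epochs=200, verbose=1)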
y_pred = model.predict(X_test)
print("Mean Squared Error:", mean_squared_error(y_test, y_pred))
Output
Epoch 1/200
1/1 [==============================] - 1s 1s/step - loss: 6270383616.0000
Epoch 2/200
1/1 [==============================] - 0s 3ms/step - loss: 6270382080.0000
Epoch 3/200
1/1 [==============================] - 0s 16ms/step - loss: 6270380544.0000
Epoch 4/200
1/1 [==============================] - 0s 0s/step - loss: 6270380544.0000
Epoch 5/200
1/1 [==============================] - 0s 0s/step - loss: 6270378496.0000
Epoch 6/200
1/1 [==============================] - 0s 0s/step - loss: 6270377984.0000
Epoch 7/200
1/1 [==============================] - 0s 0s/step - loss: 6270376448.0000
Epoch 8/200
1/1 [==============================] - 0s 16ms/step - loss: 6270375936.0000
Epoch 9/200
1/1 [==============================] - 0s 0s/step - loss: 6270375424.0000
Epoch 10/200
1/1 [==============================] - 0s 0s/step - loss: 6270373376.0000
………….
Epoch 200/200
1/1 [==============================] - 0s 16ms/step - loss: 6270372352.0000
Mean Squared Error: 7429406541.1625395
Result:
Thus the program for Linear Regression using Keras has been implemented
successfully and the output was verified.
Ex.No: 3
Implement a perceptron in TensorFlow/Keras Environment
Date:
Aim:-
To write a python program to implement a perceptron in the TensorFlow/Keras environment.
Algorithm:
Program:
import tensorflow as tf
import numpy as np
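# --- The dataset and model definition are missing from the original listing;
# --- the lines below are a minimal assumed single-neuron perceptron on
# --- synthetic binary data (100 samples, matching the 4-batch log below).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)).astype(np.float32)
y = (X[:, 0] + X[:, 1] > 0).astype(np.float32)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=100, batch_size=32)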
# Make predictions
predictions = model.predict(X)
Output:
Epoch 1/100
4/4 [==============================] - 1s 16ms/step - loss: 0.6888 - accuracy: 0.4300
Epoch 2/100
4/4 [==============================] - 0s 0s/step - loss: 0.6876 - accuracy: 0.4300
Epoch 3/100
4/4 [==============================] - 0s 0s/step - loss: 0.6867 - accuracy: 0.4300
Epoch 4/100
4/4 [==============================] - 0s 0s/step - loss: 0.6856 - accuracy: 0.4300
Epoch 5/100
4/4 [==============================] - 0s 7ms/step - loss: 0.6846 - accuracy: 0.4300
Epoch 6/100
4/4 [==============================] - 0s 0s/step - loss: 0.6835 - accuracy: 0.4300
Epoch 7/100
4/4 [==============================] - 0s 5ms/step - loss: 0.6826 - accuracy: 0.4300
Epoch 8/100
4/4 [==============================] - 0s 0s/step - loss: 0.6816 - accuracy: 0.4300
Epoch 9/100
4/4 [==============================] - 0s 0s/step - loss: 0.6808 - accuracy: 0.4300
Epoch 10/100
4/4 [==============================] - 0s 5ms/step - loss: 0.6798 - accuracy: 0.4300
Epoch 11/100
4/4 [==============================] - 0s 0s/step - loss: 0.6788 - accuracy: 0.4300
Epoch 12/100
4/4 [==============================] - 0s 5ms/step - loss: 0.6780 - accuracy: 0.4300
Epoch 13/100
4/4 [==============================] - 0s 0s/step - loss: 0.6771 - accuracy: 0.4300
Epoch 14/100
4/4 [==============================] - 0s 0s/step - loss: 0.6763 - accuracy: 0.4300
Epoch 15/100
4/4 [==============================] - 0s 5ms/step - loss: 0.6755 - accuracy: 0.4300
Epoch 16/100
4/4 [==============================] - 0s 0s/step - loss: 0.6745 - accuracy: 0.4300
Epoch 17/100
4/4 [==============================] - 0s 5ms/step - loss: 0.6738 - accuracy: 0.4300
Epoch 18/100
4/4 [==============================] - 0s 0s/step - loss: 0.6729 - accuracy: 0.4300
Epoch 19/100
4/4 [==============================] - 0s 0s/step - loss: 0.6722 - accuracy: 0.4300
Epoch 20/100
4/4 [==============================] - 0s 5ms/step - loss: 0.6714 - accuracy: 0.4300
Epoch 21/100
4/4 [==============================] - 0s 0s/step - loss: 0.6707 - accuracy: 0.4300
Epoch 22/100
4/4 [==============================] - 0s 5ms/step - loss: 0.6699 - accuracy: 0.4300
Epoch 23/100
4/4 [==============================] - 0s 0s/step - loss: 0.6689 - accuracy: 0.4300
Epoch 24/100
4/4 [==============================] - 0s 0s/step - loss: 0.6681 - accuracy: 0.4300
Epoch 25/100
4/4 [==============================] - 0s 5ms/step - loss: 0.6672 - accuracy: 0.4300
Epoch 26/100
4/4 [==============================] - 0s 0s/step - loss: 0.6662 - accuracy: 0.4300
Epoch 27/100
4/4 [==============================] - 0s 0s/step - loss: 0.6654 - accuracy: 0.4400
Epoch 28/100
4/4 [==============================] - 0s 5ms/step - loss: 0.6644 - accuracy: 0.4400
Epoch 29/100
4/4 [==============================] - 0s 0s/step - loss: 0.6635 - accuracy: 0.4400
Epoch 30/100
4/4 [==============================] - 0s 5ms/step - loss: 0.6625 - accuracy: 0.4400
Epoch 31/100
4/4 [==============================] - 0s 0s/step - loss: 0.6618 - accuracy: 0.4400
Result
Thus the perceptron in the TensorFlow/Keras environment has been implemented
successfully and the output was verified.
Ex.No: 4
Data Pre-Processing in Machine Learning
Date:
Aim:-
To write a python program to demonstrate data pre-processing in machine
learning.
Procedure:-
1. Import the package pandas.
2. Import the LabelEncoder module from sklearn.preprocessing.
3. Download the csv file 'insurance.csv' and store it in drive E:.
4. Display the raw data set, which cannot yet be used for the machine learning process.
5. Drop the duplicate rows from the data set using drop_duplicates.
6. Fit and transform the categorical features using the LabelEncoder fit and transform methods.
7. Display the data set after removing duplicates and transforming the data; this cleaned data
set is used for the machine learning process.
Program:-
#Data preprocessing
#importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
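# --- The remaining steps are missing from the original listing; the lines
# --- below follow the procedure (insurance.csv on drive E:, and the usual
# --- categorical columns of that file are assumed).
from sklearn.preprocessing import LabelEncoder
dataset = pd.read_csv('E:/insurance.csv')
print(dataset)                       # data set before pre-processing
dataset = dataset.drop_duplicates()  # drop duplicate rows
le = LabelEncoder()
for col in ['sex', 'smoker', 'region']:   # categorical columns assumed
    dataset[col] = le.fit_transform(dataset[col])
print(dataset)                       # data set ready for machine learning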
Output:-
Result:-
A python program to demonstrate data pre-processing in machine learning has been
implemented and executed successfully.
Ex.No: 5
Simple Linear Regression using ScikitLearn
Date:
Aim:-
To write a python program to implement the Simple Linear Regression technique on a real
time dataset in machine learning.
Procedure:-
Program:-
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
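# --- The dataset is not shown in the original listing; a single-feature
# --- salary data set is assumed here (file name hypothetical).
dataset = pd.read_csv('Salary_Data.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values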
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 1/3, random_state = 0)
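# --- Model fitting, prediction and visualisation are missing from the
# --- original listing; a minimal assumed completion:
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
y_pred = regressor.predict(X_test)
# Visualising the Test set results
plt.scatter(X_test, y_test, color='red')
plt.plot(X_train, regressor.predict(X_train), color='blue')
plt.title('Simple Linear Regression (Test set)')
plt.show()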
Output:-
Result:-
A python program for simple linear regression on real time dataset in machine
learning has been implemented and executed successfully.
Ex.No: 6 Naïve Bayes Algorithm
Date:
Aim:-
To write a python program to implement Naïve Bayes Algorithm on real time dataset in
machine learning.
Procedure:-
1. Import read_csv from the package pandas.
2. Import numpy and matplotlib from the python library.
3. Download the csv file 'diabetes.csv' and store it in the corresponding folder.
4. Load the data set into the data frame using read_csv.
5. Assign the input variables to X and the target variable to y.
6. Import train_test_split from sklearn.model_selection and partition the dataset into
training and testing sets.
7. Import the StandardScaler function from the sklearn.preprocessing library and transform
the dataset.
8. Fit the Naive Bayes algorithm to the training set using the GaussianNB method from the
sklearn.naive_bayes library.
9. Get the confusion matrix for the dataset using the confusion_matrix method from the
sklearn.metrics library.
10. Calculate the performance metrics for the Naïve Bayes model.
Program:-
# Naive Bayes Classification
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
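# --- The dataset loading is missing from the original listing; the lines
# --- below follow the procedure (diabetes.csv assumed):
dataset = pd.read_csv('diabetes.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values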
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 1/3, random_state = 2)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
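# --- The classifier fitting and confusion matrix are missing from the
# --- original listing; a minimal completion per procedure steps 8-9:
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)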
variety=pd.DataFrame()
variety['Pregnancies']=[6]
variety['Glucose']=[148]
variety['BloodPressure']=[72]
variety['SkinThickness']=[35]
variety['Insulin']=[0]
variety['BMI']=[33.6]
variety['DiabetesPedigreeFunction']=[0.627]
variety['Age']=[50]
print(variety)
# scale the sample with the scaler fitted on the training data
y_pred1 = classifier.predict(sc.transform(variety))
print("The outcome of the patient is:")
print(y_pred1)
Output:-
Performance metrics
False Negative Rate [0.17125382 0.34594595]
False Discovery Rate [0.19104478 0.31638418]
Accuracy [0.765625 0.765625]
Result:-
A python program for implementing Naïve Bayes Algorithm on real time dataset in
machine learning has been implemented and executed successfully.
Ex.No: 7 K-Nearest Neighbour Classifier (Ensembling Techniques)
Date:
Aim:-
To write a python program to implement the K-Nearest Neighbour classifier on real time
dataset in machine learning.
Procedure:-
1. Import read_csv from the package pandas.
2. Import numpy and matplotlib from the python library.
3. Download the csv file 'diabetes.csv' and store it in the corresponding folder.
4. Load the data set into the data frame using read_csv.
5. Assign the input variables to X and the target variable to y.
6. Import train_test_split from sklearn.model_selection and partition the dataset into
training and testing sets.
7. Import the StandardScaler function from the sklearn.preprocessing library and transform
the dataset.
8. Fit the K-Nearest Neighbour algorithm to the training set using the KNeighborsClassifier
method from the sklearn.neighbors library.
9. Get the confusion matrix for the dataset using the confusion_matrix method from the
sklearn.metrics library.
10. Calculate the performance metrics for the K-Nearest Neighbour model.
Program:-
# K-Nearest Neighbour Classification
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
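# --- The dataset loading is missing from the original listing; the lines
# --- below follow the procedure (diabetes.csv assumed):
dataset = pd.read_csv('diabetes.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values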
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 1/3, random_state = 2)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
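# --- The classifier fitting and confusion matrix are missing from the
# --- original listing; a minimal assumed completion (k = 5 assumed):
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors=5)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
# per-class counts from the confusion matrix
FP = cm.sum(axis=0) - np.diag(cm)
FN = cm.sum(axis=1) - np.diag(cm)
TP = np.diag(cm)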
TN = cm.sum() - (FP + FN + TP)
FP = FP.astype(float)
FN = FN.astype(float)
TP = TP.astype(float)
TN = TN.astype(float)
# Sensitivity, hit rate, recall, or true positive rate
TPR = TP/(TP+FN)
print("Recall",TPR)
# Specificity or true negative rate
TNR = TN/(TN+FP)
print("Specificity",TNR)
# Precision or positive predictive value
PPV = TP/(TP+FP)
print("Precision",PPV)
# Negative predictive value
NPV = TN/(TN+FN)
print("Negative Predictive Value",NPV)
# Fall out or false positive rate
FPR = FP/(FP+TN)
print("False Positive Rate",FPR)
# False negative rate
FNR = FN/(TP+FN)
print("False Negative Rate",FNR)
# False discovery rate
FDR = FP/(TP+FP)
print("False Discovery Rate",FDR)
# Overall accuracy for each class
ACC = (TP+TN)/(TP+FP+FN+TN)
print("Accuracry",ACC)
Output:-
Recall [0.88990826 0.61621622]
Specificity [0.61621622 0.88990826]
Precision [0.8038674 0.76]
Negative Predictive Value [0.76 0.8038674]
False Positive Rate [0.38378378 0.11009174]
False Negative Rate [0.11009174 0.38378378]
False Discovery Rate [0.1961326 0.24]
Accuracy [0.79101562 0.79101562]
Result:-
A python program for implementing KNN (Ensemble Learning) on real time dataset
in machine learning has been implemented and executed successfully.
Ex.No: 8 Decision Tree Classifier
Date:
Aim:-
To write a python program to implement Decision Tree Algorithm on real time
dataset in machine learning.
Procedure:-
1. Import read_csv from the package pandas.
2. Import numpy and matplotlib from the python library.
3. Download the csv file 'diabetes.csv' and store it in the corresponding folder.
4. Load the data set into the data frame using read_csv.
5. Assign the input variables to X and the target variable to y.
6. Import train_test_split from sklearn.model_selection and partition the dataset into
training and testing sets.
7. Import the StandardScaler function from the sklearn.preprocessing library and transform
the dataset.
8. Fit the Decision Tree Induction algorithm to the training set using the
DecisionTreeClassifier method from the sklearn.tree library.
9. Get the confusion matrix for the dataset using the confusion_matrix method from the
sklearn.metrics library.
10. Calculate the performance metrics for the Decision Tree model.
Program:-
# Decision Tree Classification
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
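# --- The dataset loading is missing from the original listing; the lines
# --- below follow the procedure (diabetes.csv assumed):
dataset = pd.read_csv('diabetes.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values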
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 1/3, random_state = 0)
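# --- The classifier fitting and Graphviz export are missing from the
# --- original listing; a minimal assumed completion (criterion and class
# --- names assumed):
from sklearn.tree import DecisionTreeClassifier, export_graphviz
import pydotplus
from IPython.display import Image
classifier = DecisionTreeClassifier(criterion='entropy', random_state=0)
classifier.fit(X_train, y_train)
dot_data = export_graphviz(classifier, out_file=None,
                           feature_names=dataset.columns[:-1],
                           class_names=['No', 'Yes'], filled=True)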
# Draw graph
graph = pydotplus.graph_from_dot_data(dot_data)
graph.write_png('diabeties.png')
# Show graph
Image(graph.create_png())
Output:-
Pictorial Representation of the Diabetes Decision Tree
Recall [0.90214067 0.69189189]
Specificity [0.69189189 0.90214067]
Precision [0.83806818 0.8 ]
Negative Predictive Value [0.8 0.83806818]
False Positive Rate [0.30810811 0.09785933]
False Negative Rate [0.09785933 0.30810811]
False Discovery Rate [0.16193182 0.2 ]
Accuracy [0.82617188 0.82617188]
Result:-
A python program for implementing the Decision Tree algorithm on real time dataset in
machine learning has been implemented and executed successfully.
Ex.No: 9 Random Forest Algorithm (Ensemble Learning)
Date:
Aim:-
To write a python program to implement the Random Forest algorithm (ensemble learning)
on real time dataset in machine learning.
Procedure:-
Program:-
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
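# --- The dataset loading is missing from the original listing; diabetes.csv
# --- assumed, as in the earlier exercises:
dataset = pd.read_csv('diabetes.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values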
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 1/3, random_state = 0)
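# --- The classifier fitting and confusion matrix are missing from the
# --- original listing; a minimal assumed completion (hyper-parameters assumed):
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(n_estimators=10, criterion='entropy', random_state=0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
# per-class counts from the confusion matrix
FP = cm.sum(axis=0) - np.diag(cm)
FN = cm.sum(axis=1) - np.diag(cm)
TP = np.diag(cm)
TN = cm.sum() - (FP + FN + TP)
FP = FP.astype(float)
FN = FN.astype(float)
TP = TP.astype(float)
TN = TN.astype(float)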
# Sensitivity, hit rate, recall, or true positive rate
TPR = TP/(TP+FN)
print("Recall",TPR)
# Specificity or true negative rate
TNR = TN/(TN+FP)
print("Specificity",TNR)
# Precision or positive predictive value
PPV = TP/(TP+FP)
print("Precision",PPV)
# Negative predictive value
NPV = TN/(TN+FN)
print("Negative Predictive Value",NPV)
# Fall out or false positive rate
FPR = FP/(FP+TN)
print("False Positive Rate",FPR)
# False negative rate
FNR = FN/(TP+FN)
print("False Negative Rate",FNR)
# False discovery rate
FDR = FP/(TP+FP)
print("False Discovery Rate",FDR)
# Overall accuracy for each class
ACC = (TP+TN)/(TP+FP+FN+TN)
print("Accuracry",ACC)
Output:-
Performance Metrics for Random Forest Classifier
Recall [0.92705167 0.69945355]
Specificity [0.69945355 0.92705167]
Precision [0.84722222 0.84210526]
Negative Predictive Value [0.84210526 0.84722222]
False Positive Rate [0.30054645 0.07294833]
False Negative Rate [0.07294833 0.30054645]
False Discovery Rate [0.15277778 0.15789474]
Accuracy [0.84570312 0.84570312]
Result:-
A python program for Ensemble Learning using Random Forest on real time dataset in
machine learning has been implemented and executed successfully.
Ex.No: 10 Support Vector Machine
Date:
Aim:-
To write a python program to implement the Support Vector Machine algorithm on real
time dataset in machine learning.
Procedure:-
1. Import read_csv from the package pandas.
2. Import numpy and matplotlib from the python library.
3. Download the csv file 'diabetes.csv' and store it in the corresponding folder.
4. Load the data set into the data frame using read_csv.
5. Assign the input variables to X and the target variable to y.
6. Import train_test_split from sklearn.model_selection and partition the dataset into
training and testing sets.
7. Import the StandardScaler function from the sklearn.preprocessing library and transform
the dataset.
8. Fit the Support Vector Machine algorithm to the training set using the SVC method from
the sklearn.svm library.
9. Get the confusion matrix for the dataset using the confusion_matrix method from the
sklearn.metrics library.
10. Calculate the performance metrics for the Support Vector model.
Program:-
# Support Vector Machine Classification
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
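# --- The dataset loading is missing from the original listing; the lines
# --- below follow the procedure (diabetes.csv assumed):
dataset = pd.read_csv('diabetes.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values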
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 1/3, random_state = 2)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
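# --- The classifier fitting and confusion matrix are missing from the
# --- original listing; a minimal assumed completion (RBF kernel assumed):
from sklearn.svm import SVC
classifier = SVC(kernel='rbf', random_state=0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
# single-patient sample, as constructed in Ex.No: 6 (values assumed identical)
variety = pd.DataFrame({'Pregnancies': [6], 'Glucose': [148], 'BloodPressure': [72],
                        'SkinThickness': [35], 'Insulin': [0], 'BMI': [33.6],
                        'DiabetesPedigreeFunction': [0.627], 'Age': [50]})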
sns.heatmap(cm, fmt=".0f", xticklabels=['Diabetes_yes', 'Diabetes_no'],
            yticklabels=['Diabetes_yes', 'Diabetes_no'], annot=True)
# sns.heatmap(cm, fmt=".0f", annot=True)
# scale the sample with the scaler fitted on the training data
y_pred1 = classifier.predict(sc.transform(variety))
print("The outcome of the patient is:")
print(y_pred1)
Output:-
Performance Metrics for Support Vector Machine
Result:-
A python program for Support Vector Machine on real time dataset in machine learning
has been implemented and executed successfully.
Ex.No: 11 K-Means Clustering
Date:
Aim:-
To write a python program to implement K-Means clustering on real time dataset in
machine learning.
Procedure:-
1. Import read_csv from the package pandas.
2. Import numpy and matplotlib from the python library.
3. Download the csv file 'Iris.csv' and store it in the corresponding folder.
4. Load the data set into the data frame using read_csv.
5. Assign all the input variables to X.
6. Import the sklearn.cluster library and use the KMeans method to implement K-Means
clustering for the Iris dataset.
7. Visualize the clusters using the scatter plot method.
Program:-
# K-Means Clustering
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import re
colnames = ['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm', 'Species']
# Importing the dataset
dataset = pd.read_csv('Iris1.csv', names=colnames, dtype=float)
X = dataset.iloc[:, :].values
dataset.dtypes
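# --- The clustering and visualisation steps (procedure steps 6-7) are
# --- missing from the original listing; a minimal assumed completion with
# --- three clusters for the Iris data:
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=3, random_state=0, n_init=10)
y_kmeans = kmeans.fit_predict(X[:, :4])   # cluster on the four measurements
# Visualising the clusters on the first two features
plt.scatter(X[y_kmeans == 0, 0], X[y_kmeans == 0, 1], s=50, label='Cluster 1')
plt.scatter(X[y_kmeans == 1, 0], X[y_kmeans == 1, 1], s=50, label='Cluster 2')
plt.scatter(X[y_kmeans == 2, 0], X[y_kmeans == 2, 1], s=50, label='Cluster 3')
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1],
            s=200, c='yellow', label='Centroids')
plt.xlabel('SepalLengthCm')
plt.ylabel('SepalWidthCm')
plt.legend()
plt.show()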
Output:-
Result:-
A python program for K-Means clustering on real time dataset in machine learning has
been implemented and executed successfully.
Ex.No: 12 Implement a Feed-Forward Network in TensorFlow/Keras.
Date:
Aim:-
To write a python program to implement a feed-forward network in TensorFlow/Keras.
Procedure:-
Program:-
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import tensorflow as tf
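# --- The dataset loading is missing from the original listing; diabetes.csv
# --- assumed, as in the earlier exercises:
dataset = pd.read_csv('diabetes.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values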
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
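# --- The network definition and training are missing from the original
# --- listing; a minimal assumed feed-forward architecture:
classifier = tf.keras.models.Sequential([
    tf.keras.layers.Dense(units=6, activation='relu'),
    tf.keras.layers.Dense(units=6, activation='relu'),
    tf.keras.layers.Dense(units=1, activation='sigmoid')
])
classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
classifier.fit(X_train, y_train, batch_size=32, epochs=100)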
# Part 3 - Making the predictions and evaluating the model
# print(graph_source.source)   # graph_source is not defined in this listing
classifier.get_weights()
# single-patient sample, as constructed in Ex.No: 6
variety = pd.DataFrame({'Pregnancies': [6], 'Glucose': [148], 'BloodPressure': [72],
                        'SkinThickness': [35], 'Insulin': [0], 'BMI': [33.6],
                        'DiabetesPedigreeFunction': [0.627], 'Age': [50]})
y_pred1 = classifier.predict(sc.transform(variety))
print("The outcome of the patient is:")
print(y_pred1)
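# --- The confusion-matrix computation is missing from the original listing;
# --- a minimal assumed completion:
from sklearn.metrics import confusion_matrix
y_pred = (classifier.predict(X_test) > 0.5).astype(int).ravel()
cm = confusion_matrix(y_test, y_pred)
FP = cm.sum(axis=0) - np.diag(cm)
FN = cm.sum(axis=1) - np.diag(cm)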
TP = np.diag(cm)
TN = cm.sum() - (FP + FN + TP)
FP = FP.astype(float)
FN = FN.astype(float)
TP = TP.astype(float)
TN = TN.astype(float)
# Sensitivity, hit rate, recall, or true positive rate
TPR = TP/(TP+FN)
print("Recall",TPR)
# Specificity or true negative rate
TNR = TN/(TN+FP)
print("Specificity",TNR)
# Precision or positive predictive value
PPV = TP/(TP+FP)
print("Precision",PPV)
# Negative predictive value
NPV = TN/(TN+FN)
print("Negative Predictive Value",NPV)
# Fall out or false positive rate
FPR = FP/(FP+TN)
print("False Positive Rate",FPR)
# False negative rate
FNR = FN/(TP+FN)
print("False Negative Rate",FNR)
# False discovery rate
FDR = FP/(TP+FP)
print("False Discovery Rate",FDR)
# Overall accuracy for each class
ACC = (TP+TN)/(TP+FP+FN+TN)
print("Accuracry",ACC)
Output:-
Performance Metrics for the ANN
Result:-
A python program to implement a feed-forward network in TensorFlow/Keras has been
implemented and executed successfully.
Ex.No: 13 Improve the Deep Learning Model by Fine-Tuning Hyper-Parameters
Date:
Aim:-
To write a python program to improve a deep learning model by fine-tuning its
hyper-parameters.
Procedure:-
Program
# Part 1 - Data Preprocessing
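# --- The dataset loading is missing from the original listing; diabetes.csv
# --- assumed, as in the earlier exercises:
import pandas as pd
dataset = pd.read_csv('diabetes.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values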
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
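# --- The network definition is missing from the original listing; a small
# --- assumed architecture with a two-unit softmax output, to match the
# --- categorical cross-entropy loss compiled below:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
classifier = Sequential([
    Dense(units=8, activation='relu'),
    Dense(units=8, activation='relu'),
    Dense(units=2, activation='softmax')
])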
classifier.compile(Adam(learning_rate = 0.01), "categorical_crossentropy", metrics = ["accuracy"])
# print(graph_source.source)   # graph_source is not defined in this listing
classifier.get_weights()
# single-patient sample, as constructed in Ex.No: 6
variety = pd.DataFrame({'Pregnancies': [6], 'Glucose': [148], 'BloodPressure': [72],
                        'SkinThickness': [35], 'Insulin': [0], 'BMI': [33.6],
                        'DiabetesPedigreeFunction': [0.627], 'Age': [50]})
y_pred1 = classifier.predict(sc.transform(variety))
if y_pred1[0][0] > 0.5:
    print("The outcome of the patient is: Diabetes")
else:
    print("The patient has no diabetes")
print(y_pred1)
Output:
Performance Metrics for the ANN
Result:-
A python program to improve the deep learning model by fine-tuning its hyper-parameters
has been implemented and executed successfully.
Ex.No: 14 Implement an Image Classifier using CNN in TensorFlow/Keras
Date:
Aim:-
To write a python program to implement an image classifier using CNN in
TensorFlow/Keras.
Procedure:-
Program
# Convolutional Neural Network
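# --- The image pipelines and model initialisation are missing from the
# --- original listing; a minimal assumed completion (directory layout and
# --- augmentation settings assumed):
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale=1./255, shear_range=0.2,
                                   zoom_range=0.2, horizontal_flip=True)
training_set = train_datagen.flow_from_directory('dataset/training_set',
                                                 target_size=(64, 64),
                                                 batch_size=32,
                                                 class_mode='binary')
test_datagen = ImageDataGenerator(rescale=1./255)
test_set = test_datagen.flow_from_directory('dataset/test_set',
                                            target_size=(64, 64),
                                            batch_size=32,
                                            class_mode='binary',
                                            shuffle=False)
# Initialising the CNN
cnn = tf.keras.models.Sequential()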
# Step 1 - Convolution
cnn.add(tf.keras.layers.Conv2D(filters=32, kernel_size=3, activation='relu', input_shape=[64, 64, 3]))
# Step 2 - Pooling
cnn.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2))
# Step 3 - Flattening
cnn.add(tf.keras.layers.Flatten())
cnn.add(tf.keras.layers.Dense(units=128, activation='relu'))
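# --- The output layer and compilation are missing from the original listing;
# --- a binary (cats/dogs) output is assumed:
cnn.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))
cnn.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])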
# Training the CNN on the Training set and evaluating it on the Test set
cnn.fit(x = training_set, validation_data = test_set, epochs = 25)
import numpy as np
import keras.utils as image
test_image = image.load_img('dataset/single/cord2.jpg', target_size = (64, 64))
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis = 0)
result = cnn.predict(test_image)
print(result)
a=training_set.class_indices
print(a)
if result[0][0] == 0:
    prediction = 'cats'
elif result[0][0] == 1:
    prediction = 'dogs'
else:
    prediction = 'Not defined'
print(prediction)
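# --- The confusion matrix 'cm' used below is not computed anywhere in the
# --- listing; an assumed computation from the (unshuffled) test generator:
from sklearn.metrics import confusion_matrix
test_set.reset()
y_prob = cnn.predict(test_set)
y_pred = (y_prob > 0.5).astype(int).ravel()
cm = confusion_matrix(test_set.classes, y_pred)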
FP = cm.sum(axis=0) - np.diag(cm)
FN = cm.sum(axis=1) - np.diag(cm)
TP = np.diag(cm)
TN = cm.sum() - (FP + FN + TP)
FP = FP.astype(float)
FN = FN.astype(float)
TP = TP.astype(float)
TN = TN.astype(float)
# Sensitivity, hit rate, recall, or true positive rate
TPR = TP/(TP+FN)
print("Recall",TPR)
# Specificity or true negative rate
TNR = TN/(TN+FP)
print("Specificity",TNR)
# Precision or positive predictive value
PPV = TP/(TP+FP)
print("Precision",PPV)
# Negative predictive value
NPV = TN/(TN+FN)
print("Negative Predictive Value",NPV)
# Fall out or false positive rate
FPR = FP/(FP+TN)
print("False Positive Rate",FPR)
# False negative rate
FNR = FN/(TP+FN)
print("False Negative Rate",FNR)
# False discovery rate
FDR = FP/(TP+FP)
print("False Discovery Rate",FDR)
# Overall accuracy for each class
ACC = (TP+TN)/(TP+FP+FN+TN)
print("Accuracry",ACC)
Output
Recall [0.82242991 0.61702128]
Specificity [0.61702128 0.82242991]
Precision [0.83018868 0.60416667]
Negative Predictive Value [0.60416667 0.83018868]
False Positive Rate [0.38297872 0.17757009]
False Negative Rate [0.17757009 0.38297872]
False Discovery Rate [0.16981132 0.39583333]
Accuracy [0.75974026 0.75974026]
1/1 [==============================] - 0s 31ms/step
The patient has no diabetes
Result:-
A python program to implement an image classifier using CNN in TensorFlow/Keras has
been implemented and executed successfully.
Ex.No: 15 Image generation using GAN
Date:
Aim:-
To write a python program to generate images using a Generative Adversarial Network
(GAN).
Procedure:-
Program
import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten, Reshape
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.datasets import mnist
import numpy as np
import matplotlib.pyplot as plt
import os
os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE"
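# --- build_generator and build_discriminator are not shown in the listing;
# --- minimal assumed MNIST-sized definitions (100-dim latent vector assumed):
latent_dim = 100
def build_generator():
    model = Sequential([
        Dense(256, activation='relu', input_dim=latent_dim),
        Dense(512, activation='relu'),
        Dense(784, activation='tanh'),
        Reshape((28, 28)),
    ])
    return model

def build_discriminator():
    model = Sequential([
        Flatten(input_shape=(28, 28)),
        Dense(512, activation='relu'),
        Dense(256, activation='relu'),
        Dense(1, activation='sigmoid'),
    ])
    return model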
generator = build_generator()
discriminator = build_discriminator()
# Training parameters
batch_size = 64
epochs = 10000
sample_interval = 1000
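# --- Compilation and the training loop are missing from the listing; a
# --- minimal assumed GAN loop on MNIST (optimizer settings assumed):
discriminator.compile(optimizer=Adam(0.0002, 0.5), loss='binary_crossentropy', metrics=['accuracy'])
discriminator.trainable = False
z = tf.keras.Input(shape=(latent_dim,))
gan = tf.keras.Model(z, discriminator(generator(z)))
gan.compile(optimizer=Adam(0.0002, 0.5), loss='binary_crossentropy')
# Load MNIST and scale the pixels to [-1, 1] to match the tanh generator
(X_train, _), (_, _) = mnist.load_data()
X_train = X_train / 127.5 - 1.0
real = np.ones((batch_size, 1))
fake = np.zeros((batch_size, 1))
for epoch in range(epochs):
    # train the discriminator on a real batch and a generated batch
    idx = np.random.randint(0, X_train.shape[0], batch_size)
    imgs = X_train[idx]
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    gen_imgs = generator.predict(noise, verbose=0)
    discriminator.train_on_batch(imgs, real)
    discriminator.train_on_batch(gen_imgs, fake)
    # train the generator to make the discriminator label fakes as real
    gan.train_on_batch(noise, real)
    if epoch % sample_interval == 0:
        # save a sample image at each sampling interval
        plt.imshow(gen_imgs[0], cmap='gray')
        plt.savefig(f"gan_sample_{epoch}.png")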
Output
Figure 3: Image generated at the third sampling interval (iteration 3000)
Figure 5: Image generated at the fifth sampling interval (iteration 5000)
Figure 7: Image generated at the seventh sampling interval (iteration 7000)
Figure 9: Image generated at the ninth sampling interval (iteration 9000)
Result:-
A python program for image generation using a GAN has been implemented and executed
successfully.