CS3491 - ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
LAB MANUAL
II YEAR IT
R-2021
Syllabus
COURSE OBJECTIVES:
COURSE OUTCOMES:
TEXT BOOKS:
1. Stuart Russell and Peter Norvig, “Artificial Intelligence – A Modern Approach”,
Fourth Edition, Pearson Education, 2021.
2. Ethem Alpaydin, “Introduction to Machine Learning”, MIT Press, Fourth Edition,
2020.
REFERENCES:
2. Kevin Knight, Elaine Rich, and B. Nair, "Artificial Intelligence", McGraw Hill, 2008.
4. Deepak Khemani, "Artificial Intelligence", Tata McGraw Hill Education,
2013 (http://nptel.ac.in/).
5. Christopher M. Bishop, “Pattern Recognition and Machine Learning”, Springer, 2006.
6. Tom Mitchell, "Machine Learning", McGraw Hill, 3rd Edition, 1997.
7. Charu C. Aggarwal, "Data Classification: Algorithms and Applications", CRC Press, 2014.
8. Mehryar Mohri, Afshin Rostamizadeh, Ameet Talwalkar, “Foundations of Machine
Learning”, MIT Press, 2012.
9. Ian Goodfellow, Yoshua Bengio, Aaron Courville, “Deep Learning”, MIT Press, 2016
CO-PO Mapping - Laboratory Experiments

Exercises   CO    PO
12          CO5   PO1, PO2, PO3, PO4, PO5, PO9, PO10, PO11, PO12
1. Implementation of Uninformed search algorithms (BFS, DFS)
Ex.1.1 Date:
Aim:
To implement uninformed search algorithms such as Breadth First Search and Depth First Search
in Python.
Algorithm:
1. Start at the root node; mark it visited and add it to the queue.
2. While the queue is not empty, remove the node at the front of the queue and print it.
3. Visit each unvisited neighbour of that node, marking it visited and adding it to the queue.
4. Repeat until the queue is empty.
Program:
graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': ['1', '6'],
    '4': ['8'],
    '8': [],
    '1': [],
    '6': []
}

visited = []   # list of visited nodes
queue = []     # BFS queue

def bfs(visited, graph, node):
    visited.append(node)
    queue.append(node)
    while queue:
        m = queue.pop(0)          # dequeue the next node
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

# Driver Code
bfs(visited, graph, '5')

Output:
5 3 7 2 4 8 1 6
Ex.1.2 Date:
Algorithm:
1. Start at the root node.
2. If the node has not been visited, print it and add it to the visited set.
3. Recursively apply the same procedure to each neighbour of the node.
4. Repeat until every reachable node has been visited.
Program:
graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': ['1', '6'],
    '4': ['8'],
    '8': [],
    '1': [],
    '6': []
}

visited = set()  # set of visited nodes

def dfs(visited, graph, node):
    if node not in visited:
        print(node)
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

# Driver Code
dfs(visited, graph, '5')

Output:
5
3
2
1
6
4
8
7
Result:
Thus the uninformed search algorithms BFS and DFS were implemented and executed
successfully.
2. Implementation of Informed search algorithms (A*, memory-bounded A*)
Ex.2 Date:
Aim:
To implement informed search algorithms such as A* and memory-bounded A* using Python.
Algorithm:
1. Start
2. Begin with the start node in the open list and an empty closed list.
3. While there are nodes in the open list, continue iterating.
4. Choose the node with the lowest combined cost and heuristic value (f(n) = g(n) + h(n)).
5. If the current node is the goal, reconstruct and return the path.
6. Remove the current node from the open list, add it to the closed list, and expand its
neighbors.
7. Update the cost (g(n)), heuristic estimate (h(n)), and parent of each neighbor if necessary.
8. Add neighbors to the open list if they are not already present.
9. After finding the goal, reconstruct the path from the start node to the goal node.
10. Stop
Program:
import heapq
from collections import deque

def heuristic(a, b):
    # Manhattan distance heuristic h(n)
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def reconstruct_path(came_from, current):
    path = deque()
    while current in came_from:
        path.appendleft(current)
        current = came_from[current]
    return path

def astar(grid, start, goal):
    open_set = []
    heapq.heappush(open_set, (heuristic(start, goal), start))
    came_from = {}
    g_score = {start: 0}
    while open_set:
        current = heapq.heappop(open_set)[1]
        if current == goal:
            return reconstruct_path(came_from, current)
        for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            neighbor = (current[0] + dx, current[1] + dy)
            # Stay inside the grid and skip obstacle cells (marked 1)
            if 0 <= neighbor[0] < len(grid) and 0 <= neighbor[1] < len(grid[0]) and not grid[neighbor[0]][neighbor[1]]:
                tentative_g_score = g_score[current] + 1
                if neighbor not in g_score or tentative_g_score < g_score[neighbor]:
                    came_from[neighbor] = current
                    g_score[neighbor] = tentative_g_score
                    # f(n) = g(n) + h(n)
                    heapq.heappush(open_set, (tentative_g_score + heuristic(neighbor, goal), neighbor))
    return None

grid = [
    [0, 0, 0, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 0, 0, 0]
]
start = (0, 0)
goal = (4, 4)
path = astar(grid, start, goal)
print(list(path))
Output:
[(0, 1), (0, 2), (0, 3), (0, 4), (1, 4), (2, 4), (3, 4), (4, 4)]
2.2 Memory bounded A* Search algorithm
Algorithm:
1. Start
2. Create an open list (priority queue) to store nodes to be explored.
3. Create a closed set to store explored nodes.
4. Push the start node onto the open list with a cost estimate (f_cost) calculated as the sum
of its g_cost (cost from start to current node) and h_cost (heuristic estimate from current
node to goal).
5. While the open list is not empty:
a. Pop the node with the lowest f_cost from the open list.
b. This node becomes the current node.
6. If the current node is the goal node,
a. Reconstruct the path by following the parent pointers from the goal node back to
the start node.
b. Return the path and the total cost.
7. else,
a. add the current node to the closed set.
8. Generate successor nodes by applying valid actions (neighbors) to the current node.
9. For each successor node:
a. Calculate its g_cost (cost from start to successor node) by adding the cost of
reaching the successor node from the current node.
b. Calculate its h_cost (heuristic estimate from successor node to goal).
c. Calculate its f_cost as the sum of g_cost and h_cost.
d. If the f_cost of the successor node is within the memory limit and the
successor node has not been explored (not in the closed set), add the
successor node to the open list.
10. If the goal is not found within the memory limit or there is no valid path,
a. return None.
11. Stop.
Program:
import heapq

class Node:
    def __init__(self, state, g_cost, h_cost, parent=None):
        self.state = state
        self.g_cost = g_cost
        self.h_cost = h_cost
        self.f_cost = g_cost + h_cost
        self.parent = parent

    def __lt__(self, other):
        # Needed so heapq can order nodes by f_cost
        return self.f_cost < other.f_cost

def memory_bounded_astar(start, goal, heuristic, neighbors, memory_limit):
    open_list = [Node(start, 0, heuristic(start, goal))]
    closed_set = set()
    while open_list:
        current_node = heapq.heappop(open_list)
        closed_set.add(current_node.state)
        if current_node.state == goal:
            # Reconstruct path
            path = []
            cost = current_node.g_cost
            while current_node:
                path.append(current_node.state)
                current_node = current_node.parent
            return path[::-1], cost
        for next_state in neighbors(current_node.state):
            g = current_node.g_cost + 1
            h = heuristic(next_state, goal)
            # Discard successors whose f_cost exceeds the memory limit
            if g + h <= memory_limit and next_state not in closed_set:
                heapq.heappush(open_list, Node(next_state, g, h, current_node))
    return None
# Example usage:
def heuristic(state, goal):
return abs(state[0] - goal[0]) + abs(state[1] - goal[1])
def neighbors(state):
x, y = state
return [(x-1, y), (x+1, y), (x, y-1), (x, y+1)]
start = (0, 0)
goal = (4, 4)
memory_limit = 10  # Example memory limit (maximum allowed f_cost)

result = memory_bounded_astar(start, goal, heuristic, neighbors, memory_limit)
print(result)
Output:
Result:
Thus the informed search algorithms A* and memory-bounded A* were implemented and
executed successfully.
3. Implementation of Naïve Bayes models
Ex.3 Date:
Aim:
To implement a Naïve Bayes classifier model in Python using scikit-learn.
Algorithm:
Step 1: Import the required scikit-learn classes.
Step 2: Define the sample data X and class labels y.
Step 3: Encode the categorical feature values with LabelEncoder.
Step 4: Split the data into training and test sets.
Step 5: Train and predict the given data using the Naive Bayes classifier and report the accuracy.
Program:
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Sample data
X = [[1, 'S'], [1, 'M'], [1, 'M'], [1, 'S'], [1, 'S'],
     [2, 'S'], [2, 'M'], [2, 'M'], [2, 'L'], [2, 'L'],
     [3, 'L'], [3, 'M'], [3, 'M'], [3, 'L'], [3, 'L']]
y = ['N', 'N', 'Y', 'Y', 'N', 'N', 'N', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'N']

# Encode the categorical second feature ('S'/'M'/'L') as integers
le = LabelEncoder()
X = [[row[0], code] for row, code in zip(X, le.fit_transform([row[1] for row in X]))]

# Train/test split (split parameters assumed)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = GaussianNB()
model.fit(X_train, y_train)

# Predicting
y_pred = model.predict(X_test)

# Accuracy
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
Output:
Accuracy: 0.3333333
Result:
Thus the Naïve Bayes classifier model was implemented and executed successfully in Python
using scikit-learn.
4. Implementation of Bayesian network model
Ex.4 Date:
Aim:
The aim of implementing Bayesian Networks is to model the probabilistic relationships between
a set of variables.
Algorithm:
1. Define the variables: The first step in implementing a Bayesian Network is to define the
variables that will be used in the model. Each variable should be clearly defined and its possible
states should be enumerated.
2. Determine the relationships between variables: The next step is to determine the
probabilistic relationships between the variables. This can be done by identifying the causal
relationships between the variables or by using data to estimate the conditional probabilities of
each variable given its parents.
3. Define the structure: arrange the variables in a directed acyclic graph in which each edge
represents a direct dependency between a parent variable and a child variable.
4. Assign probabilities to the variables: Once the structure of the Bayesian Network has
been defined, the probabilities of each variable must be assigned. This can be done by using
expert knowledge, data, or a combination of both.
5. Inference: Inference refers to the process of using the Bayesian Network to make
predictions or draw conclusions. This can be done by using various inference algorithms, such as
variable elimination or belief propagation.
6. Learning: Learning refers to the process of updating the probabilities in the Bayesian
Network based on new data. This can be done using various learning algorithms, such as
maximum likelihood or Bayesian learning.
7. Evaluation: The final step in implementing a Bayesian Network is to evaluate its
performance. This can be done by comparing the predictions of the model to actual data and
computing various performance metrics, such as accuracy or precision.
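As a small illustration of steps 1 to 5 before the main program, the following hand-built
network shows how a structure is defined, probabilities are assigned, and an inference query
is run with pgmpy (the rain/wet-grass variables and all probability values are assumptions
chosen only for illustration):

from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Steps 1-3: two variables and one causal edge Rain -> WetGrass (illustrative)
model = BayesianNetwork([('Rain', 'WetGrass')])

# Step 4: assign the conditional probabilities
cpd_rain = TabularCPD('Rain', 2, [[0.8], [0.2]])  # P(Rain)
cpd_wet = TabularCPD('WetGrass', 2,
                     [[0.9, 0.1],   # P(WetGrass=0 | Rain)
                      [0.1, 0.9]],  # P(WetGrass=1 | Rain)
                     evidence=['Rain'], evidence_card=[2])
model.add_cpds(cpd_rain, cpd_wet)

# Step 5: inference by variable elimination
infer = VariableElimination(model)
print(infer.query(variables=['Rain'], evidence={'WetGrass': 1}))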
Program:
import numpy as np
import csv
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

# Read the Cleveland heart disease dataset (file name assumed)
heartDisease = pd.read_csv('heart.csv')
heartDisease = heartDisease.replace('?', np.nan)
print(heartDisease.head())

# Network structure (leading edges assumed from the standard heart-disease example)
model = BayesianNetwork([('age','trestbps'), ('age','fbs'), ('sex','trestbps'),
    ('exang','trestbps'), ('trestbps','heartdisease'), ('fbs','heartdisease'),
    ('heartdisease','restecg'), ('heartdisease','thalach'), ('heartdisease','chol')])

# Learn the CPDs with Maximum Likelihood Estimation
model.fit(heartDisease, estimator=MaximumLikelihoodEstimator)

print('\n Inferencing with Bayesian Network:')
HeartDisease_infer = VariableElimination(model)
q = HeartDisease_infer.query(variables=['heartdisease'], evidence={'age': 28})
print(q)
# Second query (evidence value assumed) matching the second output table below
q2 = HeartDisease_infer.query(variables=['heartdisease'], evidence={'chol': 100})
print(q2)
Output:
   age  sex  cp  trestbps  ...  slope  ca  thal  heartdisease
0   63    1   1       145  ...      3   0     6             0
1   67    1   4       160  ...      2   3     3             2
2   67    1   4       120  ...      2   2     7             1
3   37    1   3       130  ...      3   0     3             0
4   41    0   2       130  ...      1   0     3             0

[5 rows x 14 columns]
╒════════════════╤═════════════════════╕
│ heartdisease │ phi(heartdisease) │
╞════════════════╪═════════════════════╡
│ heartdisease_0 │ 0.6791 │
├────────────────┼─────────────────────┤
│ heartdisease_1 │ 0.1212 │
├────────────────┼─────────────────────┤
│ heartdisease_2 │ 0.0810 │
├────────────────┼─────────────────────┤
│ heartdisease_3 │ 0.0939 │
├────────────────┼─────────────────────┤
│ heartdisease_4 │ 0.0247 │
╘════════════════╧═════════════════════╛
╒════════════════╤═════════════════════╕
│ heartdisease │ phi(heartdisease) │
╞════════════════╪═════════════════════╡
│ heartdisease_0 │ 0.5400 │
├────────────────┼─────────────────────┤
│ heartdisease_1 │ 0.1533 │
├────────────────┼─────────────────────┤
│ heartdisease_2 │ 0.1303 │
├────────────────┼─────────────────────┤
│ heartdisease_3 │ 0.1259 │
├────────────────┼─────────────────────┤
│ heartdisease_4 │ 0.0506 │
╘════════════════╧═════════════════════╛
Result:
Thus the Bayesian network model was constructed from the heart disease dataset and
inference queries were executed successfully.
5. Build Regression models
Aim:
To build regression models such as locally weighted linear regression and plot the necessary
graphs.
Algorithm:
1. Read the given data sample into X and the target curve (linear or non-linear) into Y.
2. Set the smoothing parameter f (the fraction of points used in each local fit).
3. For each query point x0, compute the weight of every training point from its distance to x0.
4. Solve the weighted least squares system beta = (X^T W X)^(-1) X^T W Y for the local coefficients.
5. Repeat the fit with robustifying weights computed from the residuals.
6. Prediction = x0 * beta.
Program:
import math
from math import ceil
import numpy as np
from scipy import linalg
import matplotlib.pyplot as plt
r = int(ceil(f * n))
for i in range(n):
weights = delta * w[:, i]
beta = linalg.solve(A, b)
residuals = y - yest
s = np.median(np.abs(residuals))
x = np.linspace(0, 2 * math.pi, n)
iterations=3
plt.plot(x,yest,"b-")
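The fragment above omits the weight construction and the robustness loop. A complete runnable
sketch of the same tricube-weighted locally weighted regression (the noisy sine test data,
f = 0.25, and 3 iterations are assumptions) is:

import math
from math import ceil
import numpy as np
from scipy import linalg
import matplotlib.pyplot as plt

def lowess(x, y, f, iterations):
    n = len(x)
    r = int(ceil(f * n))
    # Tricube weights from the r-th nearest-neighbour distances
    h = [np.sort(np.abs(x - x[i]))[r] for i in range(n)]
    w = np.clip(np.abs((x[:, None] - x[None, :]) / h), 0.0, 1.0)
    w = (1 - w ** 3) ** 3
    yest = np.zeros(n)
    delta = np.ones(n)
    for _ in range(iterations):
        for i in range(n):
            weights = delta * w[:, i]
            b = np.array([np.sum(weights * y), np.sum(weights * y * x)])
            A = np.array([[np.sum(weights), np.sum(weights * x)],
                          [np.sum(weights * x), np.sum(weights * x * x)]])
            beta = linalg.solve(A, b)
            yest[i] = beta[0] + beta[1] * x[i]
        # Robustifying weights computed from the residuals
        residuals = y - yest
        s = np.median(np.abs(residuals))
        delta = np.clip(residuals / (6.0 * s), -1, 1)
        delta = (1 - delta ** 2) ** 2
    return yest

n = 100
x = np.linspace(0, 2 * math.pi, n)
y = np.sin(x) + 0.3 * np.random.randn(n)   # assumed test data
yest = lowess(x, y, f=0.25, iterations=3)
plt.plot(x, y, "r.")
plt.plot(x, yest, "b-")
plt.show()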
23
Output:
(Plot of the data points with the fitted Locally Weighted Regression curve.)
Result:
Thus the program to implement the non-parametric Locally Weighted Regression algorithm and
fit data points with graph visualization has been executed successfully.
6. Build decision trees and random forests.
Aim:
To implement the concept of decision trees with suitable dataset from real world
problems using the CART algorithm.
Algorithm:
1. Begin the tree with the root node S, which contains the complete dataset.
2. On each iteration, the algorithm iterates through every unused attribute of the set S
and calculates the Gini index of that attribute.
3. The Gini index works with a categorical target variable ("Success" or "Failure") and
performs only binary splits; a short worked example follows this list.
4. The set S is then split by the selected attribute to produce a subset of the data.
5. The algorithm continues to recur on each subset, considering only attributes never selected
before.
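To make step 2 concrete, the Gini index of a candidate binary split can be computed directly.
A small illustrative helper (the example label counts are assumptions, not part of the lab
program) is shown below:

def gini_index(groups):
    # groups: one list of class labels per branch of the binary split
    n_total = sum(len(g) for g in groups)
    gini = 0.0
    for g in groups:
        if not g:
            continue
        score = sum((g.count(c) / len(g)) ** 2 for c in set(g))
        gini += (1.0 - score) * (len(g) / n_total)  # weight by branch size
    return gini

print(gini_index([['Y', 'Y'], ['N', 'N']]))  # pure split -> 0.0
print(gini_index([['Y', 'N'], ['Y', 'N']]))  # maximally mixed split -> 0.5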
Program:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

data = pd.read_csv('/Users/ganesh/PycharmProjects/DecisionTree/Social_Network_Ads.csv')
data.head()

feature_cols = ['Age', 'EstimatedSalary']
x = data.iloc[:, [2, 3]].values
y = data.iloc[:, 4].values

# Train/test split and feature scaling (75/25 split assumed)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=0)
sc = StandardScaler()
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)

classifier = DecisionTreeClassifier(criterion='gini', random_state=0)
classifier.fit(x_train, y_train)
y_pred = classifier.predict(x_test)
26
# x1, x2 form the mesh grid over the two scaled features; x_set, y_set are the training data
plt.xlim(x1.min(), x1.max())
plt.ylim(x2.min(), x2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(x_set[y_set == j, 0], x_set[y_set == j, 1], label=j)
plt.legend()
plt.show()
from sklearn.tree import export_graphviz
from io import StringIO
from IPython.display import Image
import pydotplus
dot_data = StringIO()
export_graphviz(classifier, out_file=dot_data, filled=True, rounded=True, special_characters=True,
feature_names=feature_cols, class_names=['0', '1'])
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
graph.write_png('opt_decisiontree_gini.png')
Image(graph.create_png())
Optimized output of decision tree using Gini Index (CART):
Result:
Thus the program to implement the concept of decision trees with a suitable real-world
dataset using the CART algorithm has been executed successfully.
7. Build SVM models.
Aim:
To create a machine learning model which classifies the Spam and Ham E-Mails from
a given dataset using the Support Vector Machine algorithm.
Algorithm:
1. Import the required packages.
2. Read the given csv file which contains the emails which are both spam and
ham.
3. Gather all the words given in that dataset and identify the stop words with a
mean distribution.
4. Convert the cleaned text into count vectors, split the data into training and test sets,
and train the SVM classifier.
5. Display the accuracy and F1 score and print the confusion matrix for the
classification of spam and ham.
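The class-based listing below survives only in fragments, so a compact end-to-end sketch of
the same pipeline is given first (the linear kernel, the 80/20 split, and the text/spam column
names of emails.csv are assumptions):

import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn import svm
from sklearn.metrics import classification_report, confusion_matrix

# Load the dataset: 'text' holds the e-mail body, 'spam' the 0/1 label (assumed)
data_frame = pd.read_csv("emails.csv")

# Bag-of-words features
vectorizer = CountVectorizer(min_df=2, stop_words='english')
X = vectorizer.fit_transform(data_frame['text'])
y = data_frame['spam'].values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train the SVM classifier and evaluate it
classifier = svm.SVC(kernel='linear')
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)

print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))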
Program:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import string
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import roc_curve, auc, roc_auc_score
from sklearn import metrics
from sklearn import model_selection
from sklearn import svm
from sklearn.feature_extraction.text import CountVectorizer
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords as nltk_stopwords
from wordcloud import WordCloud, STOPWORDS

stop_words = set(nltk_stopwords.words('english'))  # requires nltk.download('stopwords')
class data_read_write(object):
    def __init__(self, file_link=None):
        if file_link is not None:
            self.data_frame = pd.read_csv(file_link)

    def read_csv_file(self, file_link):
        self.data_frame = pd.read_csv(file_link)
        return self.data_frame

    def write_to_csvfile(self, file_link):
        self.data_frame.to_csv(file_link, index=False)
text = " ".join(review for review in data_frame_column)
stopwords = set(STOPWORDS)
stopwords.update(["subject"])
wordcloud = WordCloud(width = 1200, height = 800, stopwords=stopwords, max_font_size = 50,
margin=0,background_color = "white").generate(text)
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.savefig("Distribution.png")
plt.show()
wordcloud.to_file(output_image_file)
return
pass
class data_cleaning(data_read_write):
    def message_cleaning(self, message):
        # Strip punctuation, then drop stop words
        no_punc = ''.join(ch for ch in message if ch not in string.punctuation)
        final_join = [w for w in no_punc.split() if w.lower() not in stop_words]
        return final_join
    def apply_to_column(self, data_column_text):
        data_processed = data_column_text.apply(self.message_cleaning)
        return data_processed

class apply_embeddding_and_model(data_read_write):
    def apply_count_vector(self, v_data_column):
        vectorizer = CountVectorizer(min_df=2, analyzer="word", tokenizer=None,
                                     preprocessor=None, stop_words=None)
        return vectorizer.fit_transform(v_data_column)
display_labels=class_names,
cmap=plt.cm.Blues,
normalize=normalize)
disp.ax_.set_title(title)
print(title)
print(disp.confusion_matrix)
plt.savefig("SVM.png)
plt.show()
ns_probs = [0 for _ in range(len(y_test))]
lr_probs = svm_cv.predict_proba(X_test)
lr_probs = lr_probs[:, 1]
ns_auc = roc_auc_score(y_test, ns_probs)
lr_auc = roc_auc_score(y_test, lr_probs)
print('No Skill: ROC AUC=%.3f' % (ns_auc))
print('SVM: ROC AUC=%.3f' % (lr_auc))
ns_fpr, ns_tpr, _ = roc_curve(y_test, ns_probs)
lr_fpr, lr_tpr, _ = roc_curve(y_test, lr_probs)
return
data_obj = data_read_write("emails.csv")
data_frame.tail()
data_frame.describe()
data_frame.info()
data_frame.head()
data_frame.groupby('spam').describe()
data_frame['length'] = data_frame['text'].apply(len)
data_frame['length'].max()
sns.set(rc={'figure.figsize':(11.7,8.27)})
ham_messages_length = data_frame[data_frame['spam']==0]
spam_messages_length = data_frame[data_frame['spam']==1]
data_frame[data_frame['spam']==0].text.values
ham_words_length = [len(word_tokenize(title))
for title in data_frame[data_frame['spam']==0].text.values]
spam_words_length = [len(word_tokenize(title))
for title in data_frame[data_frame['spam']==1].text.values]
print(max(ham_words_length))
print(max(spam_words_length))
sns.set(rc={'figure.figsize':(11.7,8.27)})
plt.xlabel('Number of Words')
plt.legend()
plt.savefig("SVMGraph.png"
) plt.show()
def mean_word_length(x):
word_lengths = np.array([])
for word in word_tokenize(x):
        word_lengths = np.append(word_lengths, len(word))
    return word_lengths.mean()
ham_meanword_length = data_frame[data_frame['spam']==0].text.apply(mean_word_length)
spam_meanword_length = data_frame[data_frame['spam']==1].text.apply(mean_word_length)
def stop_words_ratio(x):  # helper name assumed
    num_total_words = 0
    num_stop_words = 0
    for word in word_tokenize(x):
        if word in stop_words:
            num_stop_words += 1
        num_total_words += 1
    return num_stop_words / num_total_words
print('Ham Mean: {:.3f}'.format(ham_stopwords.values.mean()))
print('Spam Mean: {:.3f}'.format(spam_stopwords.values.mean()))
plt.title('Distribution of Stop-word Ratio')
ham = data_frame[data_frame['spam']==0]
spam = data_frame[data_frame['spam']==1]
spam['length'].plot(bins=60, kind='hist')
ham['length'].plot(bins=60, kind='hist')
data_clean_obj = data_cleaning()
data_frame['clean_text'] = data_clean_obj.apply_to_column(data_frame['text'])
data_frame.head()
data_obj.data_frame.head()
data_obj.write_to_csvfile("processed_file.csv")
cv_object = apply_embeddding_and_model()
spamham_countvectorizer = cv_object.apply_count_vector(data_frame['clean_text'])
X = spamham_countvectorizer
label = data_frame['spam'].values
y = label
cv_object.apply_svm(X,y)
Output:
test set:
F1 Score: 0.9776119402985075
Recall: 0.9739776951672863
Precision: 0.9812734082397003
[[0.99429875 0.00570125]
[0.0260223 0.9739777 ]]
Result:
Thus the program to create a machine learning model which classifies the spam and ham
e-mails from a given dataset using the Support Vector Machine algorithm has been executed
successfully.
8. Implement ensembling techniques.
Aim:
To implement the ensembling technique of Blending with the given Alcohol QCM
Dataset.
Algorithm:
1. Split the dataset into train, validation, and test sets.
2. Fit the base models on the training set.
3. Make predictions with each base model on the validation and test sets.
4. Concatenate these predictions with the validation and test features.
5. Train a meta-model (linear regression) on the augmented validation set.
6. Evaluate the meta-model's predictions on the augmented test set with mean squared error.
Program:
import pandas as pd

train_ratio = 0.70   # assumed training fraction
validation_ratio = 0.20
test_ratio = 0.10

model_3.fit(x_train, y_train)
val_pred_3 = model_3.predict(x_val)
test_pred_3 = model_3.predict(x_test)
val_pred_3 = pd.DataFrame(val_pred_3)
test_pred_3 = pd.DataFrame(test_pred_3)
df_val = pd.concat([x_val, val_pred_1, val_pred_2, val_pred_3], axis=1)
df_test = pd.concat([x_test, test_pred_1, test_pred_2, test_pred_3], axis=1)
final_model = LinearRegression()
final_model.fit(df_val, y_val)
final_pred = final_model.predict(df_test)
print(mean_squared_error(y_test, final_pred))
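Since the listing above survives only partially, a self-contained sketch of the same blending
procedure follows (the three base models, the QCM file name, and the 70/20/10 split are
assumptions where the fragment does not show them):

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

df = pd.read_csv('QCM3.csv', sep=';')   # one sensor file of the Alcohol QCM dataset (assumed)
x = df.iloc[:, :-1]
y = df.iloc[:, -1]

# 70% train, 20% validation, 10% test
x_train, x_rem, y_train, y_rem = train_test_split(x, y, test_size=0.30, random_state=0)
x_val, x_test, y_val, y_test = train_test_split(x_rem, y_rem, test_size=1/3, random_state=0)

# Base models (trio assumed)
model_1 = LinearRegression()
model_2 = DecisionTreeRegressor(random_state=0)
model_3 = KNeighborsRegressor()

val_preds, test_preds = [], []
for model in (model_1, model_2, model_3):
    model.fit(x_train, y_train)
    val_preds.append(pd.DataFrame(model.predict(x_val)))
    test_preds.append(pd.DataFrame(model.predict(x_test)))

# Meta-model is trained on validation features plus base-model predictions
df_val = pd.concat([x_val.reset_index(drop=True)] + val_preds, axis=1)
df_test = pd.concat([x_test.reset_index(drop=True)] + test_preds, axis=1)

final_model = LinearRegression()
final_model.fit(df_val, y_val)
final_pred = final_model.predict(df_test)
print(mean_squared_error(y_test, final_pred))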
Output:
4790
Result:
Thus the program to implement the ensembling technique of blending with the given Alcohol
QCM dataset has been executed successfully and the output verified.
9. Implement clustering algorithms
Aim:
To implement the k-Nearest Neighbour algorithm to classify the Iris dataset.
Algorithm:
Step-1: Select the number K of neighbors.
Step-2: Calculate the Euclidean distance between the new data point and the training points.
Step-3: Take the K nearest neighbors as per the calculated Euclidean distance.
Step-4: Among these K neighbors, count the number of the data points in each category.
Step-5: Assign the new data point to that category for which the number of neighbors is
maximum.
Program:
print("accuracy is") print(classification_report(y_test, y_pred))
Output:
accuracy is
accuracy 0.97 30
Result:
Thus the program to implement the k-Nearest Neighbour algorithm for clustering the Iris
dataset has been executed successfully and the output verified.
10. Implement EM for Bayesian networks.
Aim:
To implement the EM algorithm for clustering networks using the given dataset.
Algorithm:
1. Load the Iris dataset and separate the features X and the target y.
2. Plot the real classes of the data.
3. Cluster the data with K-Means (k = 3) and plot the predicted clusters.
4. Standardize the features, then fit a Gaussian Mixture Model with three components; the EM
algorithm iteratively estimates the mixture parameters (E-step: compute cluster
responsibilities; M-step: update the means and covariances).
5. Predict the GMM cluster labels and plot them for comparison.
Program:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn import preprocessing

dataset = load_iris()
# print(dataset)

X = pd.DataFrame(dataset.data)
X.columns = ['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width']
y = pd.DataFrame(dataset.target)
y.columns = ['Targets']
# print(X)
plt.figure(figsize=(14,7))
colormap=np.array(['red','lime','black'])
# REAL PLOT
plt.subplot(1,3,1)
plt.scatter(X.Petal_Length,X.Petal_Width,c=colormap[y.Targets],s=40)
plt.title('Real')
# K-PLOT
plt.subplot(1,3,2)
model=KMeans(n_clusters=3)
model.fit(X)
predY=np.choose(model.labels_,[0,1,2]).astype(np.int64)
plt.scatter(X.Petal_Length, X.Petal_Width, c=colormap[predY], s=40)
plt.title('KMeans')
# GMM PLOT
scaler=preprocessing.StandardScaler()
scaler.fit(X)
xsa=scaler.transform(X)
xs=pd.DataFrame(xsa,columns=X.columns)
gmm=GaussianMixture(n_components=3)
gmm.fit(xs)
y_cluster_gmm=gmm.predict(xs)
plt.subplot(1,3,3)
plt.scatter(X.Petal_Length,X.Petal_Width,c=colormap[y_cluster_gmm],s=40)
plt.title('GMM Classification')
plt.show()
Output:
(Three scatter plots: the real classes, the K-Means clusters, and the GMM clusters.)
Result:
Thus the program to implement the EM algorithm for clustering networks using the
given dataset has been executed successfully and the output verified.
11. Build simple NN models.
Aim:
To build a simple neural network model that recognizes handwritten characters from the
EMNIST dataset.
Algorithm:
1. Image Acquisition: The first step is to acquire images of paper documents with the help of
optical scanners. This way, an original image can be captured and stored.
2. Pre-processing: The noise level on an image should be optimized and areas outside the text
removed. Pre-processing is especially vital for recognizing handwritten documents that are
more sensitive to noise.
4. Feature Extraction: This step means splitting the input data into a set of features, that is,
to find essential characteristics that make one or another pattern recognizable.
5. Training the network:
● Starting with the input layer, propagate data forward to the output
layer. This step is the forward propagation.
● Based on the output, calculate the error (the difference between the
predicted and known outcome). The error needs to be minimized.
● Back propagate the error: find its derivative with respect to each
weight in the network, and update the model (see the NumPy sketch after this list).
6. Post processing: This stage is the process of refinement as an OCR model can require
some corrections. However, it isn’t possible to achieve 100% recognition accuracy. The
identification of characters heavily depends on the context.
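The forward- and back-propagation bullets in step 5 can be seen in miniature below: an
illustrative NumPy sketch of a single gradient step on a tiny two-layer network, separate
from the Keras program that follows (the layer sizes and learning rate are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((4, 3))    # 4 samples, 3 input features
t = rng.random((4, 2))    # known target outputs
W1 = rng.random((3, 5))   # input -> hidden weights
W2 = rng.random((5, 2))   # hidden -> output weights
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward propagation: input layer -> hidden layer -> output layer
h = sigmoid(X @ W1)
y = sigmoid(h @ W2)

# Error between the predicted and known outcome
error = y - t

# Back-propagation: derivative of the error w.r.t. each weight, then update
d_out = error * y * (1 - y)
d_hidden = (d_out @ W2.T) * h * (1 - h)
W2 -= lr * (h.T @ d_out)
W1 -= lr * (X.T @ d_hidden)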
Program:
import tensorflow as tf
import matplotlib
matplotlib.use('TkAgg')
from emnist import list_datasets, extract_training_samples, extract_test_samples
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation
from tensorflow.keras.utils import to_categorical

NB_EPOCH = 30
BATCH_SIZE = 256
VERBOSE = 2
N_HIDDEN = 512
print(list_datasets())
X_train, y_train = extract_training_samples('byclass')
print("train shape: ", X_train.shape)
print("train labels: ", y_train.shape)
X_test, y_test = extract_test_samples('byclass')
print("test shape: ", X_test.shape)
print("test labels: ", y_test.shape)
#for indexing from 0
y_train = y_train-1
y_test = y_test-1
RESHAPED = X_train.shape[1] * X_train.shape[2]  # 28 x 28 = 784 pixels per image
X_train = X_train.reshape(len(X_train), RESHAPED)
X_test = X_test.reshape(len(X_test), RESHAPED)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
# normalize
X_train /= 255
X_test /= 255
NB_CLASSES = 62     # classes in the 'byclass' split (value assumed)
DROPOUT = 0.3       # assumed dropout rate
OPTIMIZER = 'adam'  # assumed optimizer

Y_train = to_categorical(y_train, NB_CLASSES)
Y_test = to_categorical(y_test, NB_CLASSES)

# N_HIDDEN hidden units per layer, NB_CLASSES outputs
model = Sequential()
model.add(Dense(N_HIDDEN, input_shape=(RESHAPED,)))
model.add(Activation('relu'))
model.add(Dropout(DROPOUT))
model.add(Dense(256))
model.add(Activation('relu'))
model.add(Dropout(DROPOUT))
model.add(Dense(NB_CLASSES))
model.add(Activation('softmax'))
model.summary()
model.compile(loss='categorical_crossentropy',
              optimizer=OPTIMIZER,
              metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=BATCH_SIZE, epochs=NB_EPOCH, verbose=VERBOSE)
score = model.evaluate(X_test, Y_test, verbose=VERBOSE)
print("Test accuracy:", score[1])
Output:
['balanced', 'byclass', 'bymerge', 'digits', 'letters', 'mnist']
train shape: (697932, 28, 28)
=================================================================
(model.summary() table, truncated in this copy; closing rows shown)
activation_4 (Activation)    (None, 256)    0
=================================================================
Non-trainable params: 0
Result:
Thus the program to implement the neural network model for the given dataset has been
executed successfully.
12. Build deep learning NN models.
Aim:
To implement and build a Convolutional neural network model which predicts the
age and gender of a person using the given pre-trained models.
Algorithm:
Steps in CNN Algorithm:
Program:
import cv2 as cv
from google.colab.patches import cv2_imshow  # running in Colab (assumed)

def getFaceBox(net, frame, conf_threshold=0.7):
    frameOpencvDnn = frame.copy()
    frameHeight = frameOpencvDnn.shape[0]
    frameWidth = frameOpencvDnn.shape[1]
    blob = cv.dnn.blobFromImage(frameOpencvDnn, 1.0, (300, 300), [104, 117, 123], True, False)
    net.setInput(blob)
    detections = net.forward()
    bboxes = []
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > conf_threshold:
            x1 = int(detections[0, 0, i, 3] * frameWidth)
            y1 = int(detections[0, 0, i, 4] * frameHeight)
            x2 = int(detections[0, 0, i, 5] * frameWidth)
            y2 = int(detections[0, 0, i, 6] * frameHeight)
            bboxes.append([x1, y1, x2, y2])
            cv.rectangle(frameOpencvDnn, (x1, y1), (x2, y2), (0, 255, 0), int(round(frameHeight/150)), 8)
    return frameOpencvDnn, bboxes
faceProto = "/content/opencv_face_detector.pbtxt"
faceModel = "/content/opencv_face_detector_uint8.pb"
ageProto = "/content/age_deploy.prototxt"
ageModel = "/content/age_net.caffemodel" genderProto
= "/content/gender_deploy.prototxt" genderModel =
"/content/gender_net.caffemodel"
def age_gender_detector(frame):
# print(bbox)
genderPreds = genderNet.forward()
57
gender = genderList[genderPreds[0].argmax()]
return frameFace
input_image = cv.imread('sample.jpg')  # input image path assumed
output = age_gender_detector(input_image)
cv2_imshow(output)
Output:
gender : Male, conf = 1.000
Result:
Thus the program to implement and build a convolutional neural network model which
predicts the age and gender of a person using the given pre-trained models has been
executed successfully and the output verified.