Ex.No:1(a) UNINFORMED SEARCH ALGORITHM
(BREADTH FIRST SEARCH)
Date:
AIM:
To implement the uninformed search algorithm Breadth First Search (BFS).
ALGORITHM:
STEP 1: SET STATUS = 1 (ready state) for each node in G
STEP 2: Enqueue the starting node A and set its STATUS = 2 (waiting state)
STEP 3: Repeat Steps 4 and 5 until QUEUE is empty
STEP 4: Dequeue a node N. Process it and set its STATUS = 3 (processed state).
STEP 5: Enqueue all the neighbours of N that are in the ready state (whose STATUS = 1) and set their STATUS = 2 (waiting state) [END OF LOOP]
STEP 6: EXIT
PROGRAM(BFS):
from collections import defaultdict

class Graph:
    def __init__(self):
        # adjacency list representation of the graph
        self.graph = defaultdict(list)

    def addEdge(self, u, v):
        self.graph[u].append(v)

    def BFS(self, s):
        # mark all vertices as not visited
        visited = [False] * (len(self.graph))
        queue = []
        # enqueue the source vertex and mark it visited
        queue.append(s)
        visited[s] = True
        while queue:
            # dequeue a vertex, print it, and enqueue its unvisited neighbours
            s = queue.pop(0)
            print(s, end=" ")
            for i in self.graph[s]:
                if visited[i] == False:
                    queue.append(i)
                    visited[i] = True
g = Graph()
g.addEdge(0, 1)
g.addEdge(0, 2)
g.addEdge(1, 2)
g.addEdge(2, 0)
g.addEdge(2, 3)
g.addEdge(3, 3)
print ("Following is Breadth First Traversal" " (starting from vertex 2)")
g.BFS(2)
OUTPUT:
Following is Breadth First Traversal (starting from vertex 2)
2 0 3 1
RESULT:
Thus the uninformed search algorithm BFS was executed successfully and the output was verified.
Ex.No:1(b) UNINFORMED SEARCH ALGORITHM
(DEPTH FIRST SEARCH)
Date:
AIM:
To implement the uninformed search algorithm Depth First Search (DFS).
ALGORITHM(DFS):
Step 1: SET STATUS = 1 (ready state) for each node in G
Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)
Step 3: Repeat Steps 4 and 5 until STACK is empty
Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)
Step 5: Push on the stack all the neighbors of N that are in the ready state (whose STATUS = 1) and set their STATUS = 2 (waiting state) [END OF LOOP]
Step 6: EXIT
PROGRAM(DFS):
from collections import defaultdict

class Graph:
    def __init__(self):
        # adjacency list representation of the graph
        self.graph = defaultdict(list)

    def addEdge(self, u, v):
        self.graph[u].append(v)

    def DFSUtil(self, v, visited):
        # mark the current vertex visited and print it
        visited.add(v)
        print(v, end=' ')
        # recur for every unvisited neighbour
        for neighbour in self.graph[v]:
            if neighbour not in visited:
                self.DFSUtil(neighbour, visited)

    def DFS(self, v):
        visited = set()
        self.DFSUtil(v, visited)

if __name__ == "__main__":
    g = Graph()
    g.addEdge(0, 1)
    g.addEdge(0, 2)
    g.addEdge(1, 2)
    g.addEdge(2, 0)
    g.addEdge(2, 3)
    g.addEdge(3, 3)
    print("Following is Depth First Traversal (starting from vertex 2)")
    g.DFS(2)
OUTPUT:
Following is Depth First Traversal (starting from vertex 2)
2 0 1 3
RESULT:
Thus the uninformed search algorithm DFS was executed successfully and the output was verified.
Ex.No:2(a) IMPLEMENTATION OF INFORMED SEARCH ALGORITHM
(A*)
Date:
AIM:
To write a Python program to implement the A* algorithm with a bound on the number of expanded nodes.
ALGORITHM:
STEP 1: Start
STEP 2: Create a class Node to represent a state in the search. It has attributes like the state itself, the parent node, the cost to reach the current state, and a heuristic value.
STEP 3: Implement the astar function that takes the start state, goal state, and a graph representing the connections between states.
STEP 4: Create an empty priority queue to store nodes based on their cost plus heuristic values.
STEP 5: Push the initial node with the start state, no parent, cost of 0, and heuristic of 0 into the priority queue.
STEP 6: Create an empty set to keep track of visited states.
STEP 7: Repeat until the priority queue is empty.
STEP 8: If the current node's state has already been visited, skip to the next iteration.
STEP 9: Add the current node's state to the visited set.
STEP 10: If no path is found after exhausting all possible nodes, return None.
STEP 11: Define the graph representation, specifying the connections between states and the associated costs.
STEP 12: Define the start and goal states.
STEP 13: Call the astar function with the start, goal, and graph parameters and store the result.
STEP 14: Print the result.
PROGRAM:
import heapq

class Node:
    def __init__(self, state, parent, cost, heuristic):
        self.state = state
        self.parent = parent        # link used to reconstruct the path
        self.cost = cost            # g(n): cost to reach this state
        self.heuristic = heuristic  # h(n): estimated cost to the goal
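The body of the astar function and the graph definition are not shown in this listing; the following is a minimal sketch consistent with the driver code below, where the graph, its edge costs, and the zero heuristic (per STEP 5) are assumptions. With this graph, astar('A', 'D', graph, 10) returns ['A', 'B', 'C', 'D'], matching the output below.

def astar(start, goal, graph, max_nodes=None):
    # heapq is imported above; the counter breaks ties so Node objects
    # are never compared directly
    counter = 0
    open_list = [(0, counter, Node(start, None, 0, 0))]
    visited = set()
    expanded = 0
    while open_list:
        if max_nodes is not None and expanded >= max_nodes:
            return None  # node budget exhausted
        _, _, current = heapq.heappop(open_list)
        if current.state in visited:
            continue
        visited.add(current.state)
        expanded += 1
        if current.state == goal:
            # walk the parent links back to the start to recover the path
            path = []
            while current:
                path.append(current.state)
                current = current.parent
            return path[::-1]
        for neighbour, step_cost in graph.get(current.state, []):
            if neighbour not in visited:
                g = current.cost + step_cost
                counter += 1
                # the heuristic is taken as 0 (STEP 5), so f = g here
                heapq.heappush(open_list, (g, counter, Node(neighbour, current, g, 0)))
    return None

# Assumed graph: state -> list of (neighbour, edge cost) pairs
graph = {
    'A': [('B', 1), ('C', 3)],
    'B': [('C', 1), ('D', 4)],
    'C': [('D', 1)],
    'D': [],
}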
start = 'A'
goal = 'D'
max_nodes = 10
result = astar(start, goal, graph, max_nodes)
print(result)
OUTPUT:
['A','B','C','D']
RESULT:
Thus the program implementing the bounded A* algorithm was executed successfully and the output was verified.
Ex.No:2(b) IMPLEMENTATION OF INFORMED SEARCH ALGORITHM
(MEMORY-BOUNDED A*)
Date:
AIM:
To write a Python program to implement the A* algorithm.
ALGORITHM:
STEP 1: Start
STEP 2: Create a class Node to represent a state in the search. It has attributes like the state itself, the parent node, the cost to reach the current state, and a heuristic value.
STEP 3: Implement the astar function that takes the start state, goal state, and a graph representing the connections between states.
STEP 4: Create an empty priority queue to store nodes based on their cost plus heuristic values.
STEP 5: Push the initial node with the start state, no parent, cost of 0, and heuristic of 0 into the priority queue.
STEP 6: Create an empty set to keep track of visited states.
STEP 7: Repeat until the priority queue is empty.
STEP 8: If the current node's state has already been visited, skip to the next iteration.
STEP 9: Add the current node's state to the visited set.
STEP 10: If no path is found after exhausting all possible nodes, return None.
STEP 11: Define the graph representation, specifying the connections between states and the
associated costs.
STEP 12: Define the start and goal states.
STEP 13: Call the astar function with the start, goal, and graph parameters and store the
result.
STEP 14: Print the result.
PROGRAM:
import heapq

class Node:
    def __init__(self, state, parent, cost, heuristic):
        self.state = state
        self.parent = parent
        self.cost = cost
        self.heuristic = heuristic
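The astar function body and graph are again omitted; the sketch given under Ex.No:2(a) can be reused here (its max_nodes parameter defaults to None, so astar(start, goal, graph) runs unbounded), with the assumed graph extended by an 'E' state:

graph = {
    'A': [('B', 1), ('C', 3)],
    'B': [('C', 1), ('E', 5)],
    'C': [('E', 1)],
    'E': [],
}

With these assumed costs the recovered path is ['A', 'B', 'C', 'E'], matching the output below.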
start = 'A'
goal = 'E'
result = astar(start, goal, graph)
print(result)
OUTPUT:
['A','B','C','E']
RESULT:
Thus the program implementing the A* algorithm was executed successfully and the output was verified.
Ex.No:3
IMPLEMENT NAÏVE BAYES MODEL
Date:
AIM:
To write a python program to implement Naïve Bayes model.
ALGORITHM:
STEP 1: Load the libraries: import the required libraries such as pandas, numpy, and sklearn.
STEP 2: Load the data into a pandas dataframe.
STEP 3: Clean and preprocess the data as necessary. For example, you can handle missing values,
convert categorical variables into numerical variables, and normalize the data.
STEP 4: Split the data into training and test sets using the train_test_split function from scikit-learn.
STEP 5: Train the Gaussian Naive Bayes model using the training data.
STEP 6: Evaluate the performance of the model using the test data and the accuracy_score
function from scikit-learn.
STEP 7: Finally, you can use the trained model to make predictions on new data.
PROGRAM:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

# Generate random data
x, y = make_classification(n_samples=50, n_features=2, n_informative=2,
                           n_redundant=0, n_clusters_per_class=1, random_state=25)

# Split data into training and test sets
x_train, x_test = x[:20], x[20:]
y_train, y_test = y[:20], y[20:]

# Train Naive Bayes classifier
clf = GaussianNB()
clf.fit(x_train, y_train)

# Predict labels for test data
y_pred = clf.predict(x_test)

# Calculate accuracy of the classifier
accuracy = clf.score(x_test, y_test)
print("Accuracy: ", accuracy)

# Plot the data and decision boundary
x_min, x_max = x[:, 0].min() - 1, x[:, 0].max() + 1
y_min, y_max = x[:, 1].min() - 1, x[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02), np.arange(y_min, y_max, 0.02))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, alpha=0.3, cmap=plt.cm.coolwarm)  # draw the decision regions computed above
plt.scatter(x[:, 0], x[:, 1], c=y, cmap=plt.cm.coolwarm)
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.title('Naive Bayes Classification')
plt.show()
OUTPUT:
Accuracy:  1.0
DIAGRAMMATIC PROJECTION
RESULT:
Thus the Python program for implementing the Naïve Bayes model was developed and the output was verified successfully.
Ex.No:4
IMPLEMENT BAYESIAN NETWORKS
Date:
AIM:
To construct a Bayesian network, to demonstrate the diagnosis of heart patients using
standard Heart Disease Data Set.
ALGORITHM:
STEP 1: Start
STEP 2: Read the training dataset T.
STEP 3: Calculate the mean and standard deviation of the predictor variables in each class.
STEP 4: Repeat: calculate the probability of fi using the Gaussian density equation in each class, until the probability of all predictor variables (f1, f2, f3, ..., fn) has been calculated.
STEP 5: Calculate the likelihood for each class.
STEP 6: Get the greatest likelihood.
STEP 7: Stop
PROGRAM:
import bayespy as bp
import numpy as np
import csv
from colorama import init
from colorama import Fore, Back, Style
init()
ageEnum = {'SuperSeniorCitizen':0, 'SeniorCitizen':1, 'MiddleAged':2, 'Youth':3, 'Teen':4}
genderEnum = {'Male':0, 'Female':1}
familyHistoryEnum = {'Yes':0, 'No':1}
dietEnum = {'High':0, 'Medium':1, 'Low':2}
lifeStyleEnum = {'Athlete':0, 'Active':1, 'Moderate':2, 'Sedetary':3}
cholesterolEnum = {'High':0, 'BorderLine':1, 'Normal':2}
heartDiseaseEnum = {'Yes':0, 'No':1}
# read the dataset and map every field to its numeric code
with open('heart disease_data.csv') as csvfile:
    lines = csv.reader(csvfile)
    dataset = list(lines)
data = []
for x in dataset:
    data.append([ageEnum[x[0]], genderEnum[x[1]], familyHistoryEnum[x[2]],
                 dietEnum[x[3]], lifeStyleEnum[x[4]], cholesterolEnum[x[5]],
                 heartDiseaseEnum[x[6]]])
data = np.array(data)
N = len(data)
# one Dirichlet prior plus an observed Categorical node per attribute
p_age = bp.nodes.Dirichlet(1.0*np.ones(5))
age = bp.nodes.Categorical(p_age, plates=(N,))
age.observe(data[:,0])
p_gender = bp.nodes.Dirichlet(1.0*np.ones(2))
gender = bp.nodes.Categorical(p_gender, plates=(N,))
gender.observe(data[:,1])
p_familyhistory = bp.nodes.Dirichlet(1.0*np.ones(2))
familyhistory = bp.nodes.Categorical(p_familyhistory, plates=(N,))
familyhistory.observe(data[:,2])
p_diet = bp.nodes.Dirichlet(1.0*np.ones(3))
diet = bp.nodes.Categorical(p_diet, plates=(N,))
diet.observe(data[:,3])
p_lifestyle = bp.nodes.Dirichlet(1.0*np.ones(4))
lifestyle = bp.nodes.Categorical(p_lifestyle, plates=(N,))
lifestyle.observe(data[:,4])
p_cholesterol = bp.nodes.Dirichlet(1.0*np.ones(3))
cholesterol = bp.nodes.Categorical(p_cholesterol, plates=(N,))
cholesterol.observe(data[:,5])
# heart disease depends on all six parent attributes
p_heartdisease = bp.nodes.Dirichlet(np.ones(2), plates=(5, 2, 2, 3, 4, 3))
heartdisease = bp.nodes.MultiMixture([age, gender, familyhistory, diet, lifestyle, cholesterol],
                                     bp.nodes.Categorical, p_heartdisease)
heartdisease.observe(data[:,6])
p_heartdisease.update()
m = 0
while m == 0:
    print("\n")
    res = bp.nodes.MultiMixture([int(input('Enter Age: ' + str(ageEnum))),
        int(input('Enter Gender: ' + str(genderEnum))),
        int(input('Enter FamilyHistory: ' + str(familyHistoryEnum))),
        int(input('Enter dietEnum: ' + str(dietEnum))),
        int(input('Enter LifeStyle: ' + str(lifeStyleEnum))),
        int(input('Enter Cholesterol: ' + str(cholesterolEnum)))],
        bp.nodes.Categorical, p_heartdisease).get_moments()[0][heartDiseaseEnum['Yes']]
    print("Probability(HeartDisease) = " + str(res))
    m = int(input("Enter for Continue:0, Exit:1 "))
OUTPUT:
Enter Age: {'SuperSeniorCitizen':0, 'SeniorCitizen':1, 'MiddleAged':2, 'Youth':3, 'Teen':4} 1
Enter Gender: {'Male':0, 'Female':1} 0
Enter FamilyHistory: {'Yes':0, 'No':1} 0
Enter dietEnum: {'High':0, 'Medium':1, 'Low':2} 2
Enter LifeStyle: {'Athlete':0, 'Active':1, 'Moderate':2, 'Sedetary':3} 2
Enter Cholesterol: {'High':0, 'BorderLine':1, 'Normal':2} 1
Probability(HeartDisease) = 0.5
Enter for Continue:0, Exit:1 1
RESULT:
Thus the Bayesian network for diagnosing heart patients using the standard Heart Disease Data Set was constructed and executed successfully and the output was verified.
Ex.No:5
BUILD REGRESSION MODELS
Date:
AIM:
To build regression models such as locally weighted linear regression and plot
the necessary graphs.
ALGORITHM:
STEP 1: Start
STEP 2: Import the packages for the linear regression.
STEP 3: Read the dataset from the CSV file.
STEP 4: Select the independent and dependent variables X and Y.
STEP 5: Set the X and Y labels and the title.
STEP 6: Plot the points using the function 'scatter()'.
STEP 7: Determine the weight matrix using w(x, x0) = exp(-(x - x0)^2 / (2τ^2)).
STEP 8: Determine the model parameter β(x0) = (X^T W X)^-1 X^T W y.
STEP 9: Prediction = x0 · β(x0).
STEP 10: Stop.
PROGRAM:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm

data = pd.read_csv('Simple linear regression.csv')
data.describe()
y = data["GPA"]
x1 = data["SAT"]
plt.scatter(x1, y)
plt.xlabel("SAT", fontsize=20)
plt.ylabel("GPA", fontsize=20)
plt.title("Regression model")
x = sm.add_constant(x1)
results = sm.OLS(y, x).fit()
results.summary()
plt.scatter(x1, y)
yhat = 0.0017*x1 + 0.275
fig = plt.plot(x1, yhat, lw=4, c="green", label="regression line")
plt.xlabel("SAT", fontsize=20)
plt.ylabel("GPA", fontsize=20)
plt.show()
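The aim names locally weighted linear regression, while the listing above fits an ordinary least-squares line; the following is a minimal sketch of the locally weighted step from STEPS 7-9, where the bandwidth tau and the query point are assumptions and x1 and y are reused from the listing:

import numpy as np

def lwr_predict(x0, X, y, tau):
    # STEP 7: Gaussian weights w(x, x0) = exp(-(x - x0)^2 / (2 tau^2))
    w = np.exp(-(X[:, 1] - x0) ** 2 / (2 * tau ** 2))
    W = np.diag(w)
    # STEP 8: beta(x0) = (X^T W X)^-1 X^T W y
    beta = np.linalg.pinv(X.T @ W @ X) @ (X.T @ W @ y)
    # STEP 9: prediction = x0 . beta (intercept term prepended)
    return np.array([1, x0]) @ beta

# Usage with the SAT (x1) and GPA (y) columns loaded above:
X_lw = np.column_stack([np.ones(len(x1)), np.asarray(x1)])
print(lwr_predict(600, X_lw, np.asarray(y), tau=100))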
OUTPUT:
Data set for GPA and SAT
GPA     SAT
0.1     110
0.11    111
0.3     112
0.23    113
0.28    565
0.34    567
0.44    976
0.5     756
0.6     788
0.66    123
0.7     114
...     ...
1.04    123
1.3     154
1.34    124
DIAGRAMMATIC PROJECTION
RESULT:
Thus the program given above was executed and verified successfully.
Ex.No:6(a) BUILD DECISION TREES AND RANDOM FORESTS
(DECISION TREES)
Date:
AIM:
To implement the concept of decision tree with suitable dataset in python.
ALGORITHM:
STEP 1: Start
STEP 2: Import necessary libraries: numpy, matplotlib, seaborn, pandas, train_test_split, LabelEncoder, DecisionTreeClassifier, plot_tree, and RandomForestClassifier.
STEP 3: Read the data from 'flowers.csv' into a pandas DataFrame.
STEP 4: Extract the features into an array X, and the target variable into an array y.
STEP 5: Encode the target variable using the LabelEncoder.
STEP 6: Split the data into training and testing sets using the train_test_split function.
STEP 7: Create a DecisionTreeClassifier object, fit the model to the training data, and visualize the decision tree using plot_tree.
STEP 8: Create a RandomForestClassifier object with 100 estimators, fit the model to the training data, and visualize the random forest by displaying 6 trees.
STEP 9: Print the accuracy of the decision tree and random forest models using the score method on the test data.
PROGRAM:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report

# Load the dataset ('flowers.csv', as named in STEP 3) and split it;
# the last column is assumed to hold the target variable
data = pd.read_csv('flowers.csv')
X = data.iloc[:, :-1].values
y = data.iloc[:, -1].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=100)

# Function to perform training with the Gini index
def train_using_gini(X_train, y_train):
    clf_gini = DecisionTreeClassifier(criterion="gini", random_state=100)
    # Performing training
    clf_gini.fit(X_train, y_train)
    return clf_gini

# Function to perform training with entropy
def train_using_entropy(X_train, y_train):
    clf_entropy = DecisionTreeClassifier(criterion="entropy", random_state=100)
    # Performing training
    clf_entropy.fit(X_train, y_train)
    return clf_entropy

# Driver code
def main():
    clf = train_using_gini(X_train, y_train)
    y_pred = clf.predict(X_test)
    print("Confusion Matrix: ", confusion_matrix(y_test, y_pred))
    print("Accuracy : ", accuracy_score(y_test, y_pred) * 100)
    print("Report : ", classification_report(y_test, y_pred))

if __name__ == "__main__":
    main()
RESULT:
Thus the program to implement the concept of decision trees with a suitable dataset has been executed successfully.
Ex.No:6(b) BUILD DECISION TREES AND RANDOM FORESTS
(RANDOM FORESTS)
Date:
AIM:
To implement the concept of random forests with suitable dataset in python.
ALGORITHM:
STEP 1: Start
STEP 2: Import necessary libraries: numpy, matplotlib, seaborn, pandas, train_test_split, LabelEncoder, DecisionTreeClassifier, plot_tree, and RandomForestClassifier.
STEP 3: Read the data from 'flowers.csv' into a pandas DataFrame.
STEP 4: Extract the features into an array X, and the target variable into an array y.
STEP 5: Encode the target variable using the LabelEncoder.
STEP 6: Split the data into training and testing sets using the train_test_split function.
STEP 7: Create a DecisionTreeClassifier object, fit the model to the training data, and visualize the decision tree using plot_tree.
STEP 8: Create a RandomForestClassifier object with 100 estimators, fit the model to the training data, and visualize the random forest by displaying 6 trees.
STEP 9: Print the accuracy of the decision tree and random forest models using the score method on the test data.
PROGRAM:
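The program listing is not reproduced in this section; the following is a minimal random-forest sketch following the steps above, where the dataset name comes from STEP 3 and the split sizes and random_state are assumptions:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.ensemble import RandomForestClassifier

# STEPS 3-6: load 'flowers.csv', encode the target, and split the data
data = pd.read_csv('flowers.csv')
X = data.iloc[:, :-1].values
y = LabelEncoder().fit_transform(data.iloc[:, -1].values)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=100)

# STEP 8: random forest with 100 estimators
rf = RandomForestClassifier(n_estimators=100, random_state=100)
rf.fit(X_train, y_train)
print(rf.predict(X_test))
print("Accuracy:", rf.score(X_test, y_test))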
OUTPUT:
[0 1 1 0 1 1 1 0 0 0 1 0 1 0 1 1 1 0 0 0 1 0 1 1 1 1 1 1 0 1 0 0 0 0 1 0 1
 1 1 1 1 1 1 1 0 1 1 0 0 0 0 1 1 0 0 0 1 0 0 0 1 0 1 1 0 0 1 1 1 1 1 1 1
 0 1 1 0 0 0 1 0 1 1 0 0 0 1 0 0 1]
RESULT:
Thus the program to implement the concept of random forests with a suitable dataset has been executed successfully.
Ex.No:7
BUILD SVM MODELS
Date:
AIM:
To write a Python program to build SVM model.
ALGORITHM:
STEP 1: Import the necessary libraries (matplotlib.pyplot, numpy, and svm from sklearn).
STEP 2: Define the features (X) and labels (y) for the fruit dataset.
STEP 3: Create an SVM classifier with a linear kernel using svm.SVC(kernel='linear').
STEP 4: Train the classifier on the fruit data using clf.fit(X, y).
STEP 5: Plot the fruits and decision boundary using plt.scatter(X[:, 0], X[:, 1], c=colors), where colors is a list of colors assigned to each fruit based on its label.
STEP 6: Create a meshgrid to evaluate the decision function using np.meshgrid(np.linspace(xlim[0],
xlim[1], 100), np.linspace(ylim[0], ylim[1], 100)).
STEP 7: Use the decision function to create a contour plot of the decision boundary and margins
using ax.contour(xx, yy, Z, colors='k', levels=[-1, 0, 1], alpha=0.5, linestyles=['--', '-', '--']).
STEP 8: Show the plot using plt.show().
PROGRAM:
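The program listing is not reproduced in this section; the following is a minimal sketch following STEPS 1-8 above, in which the fruit measurements, labels, and colours are assumptions:

import matplotlib.pyplot as plt
import numpy as np
from sklearn import svm

# Assumed fruit data: [weight (g), size (cm)]; 0 = apple, 1 = orange
X = np.array([[120, 6.5], [150, 7.0], [130, 6.8], [170, 7.5],
              [190, 8.0], [200, 8.3], [210, 8.4], [180, 7.8]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# STEPS 3-4: linear-kernel SVM trained on the fruit data
clf = svm.SVC(kernel='linear')
clf.fit(X, y)

# STEP 5: plot the fruits, coloured by label
colors = ['red' if label == 0 else 'orange' for label in y]
ax = plt.gca()
ax.scatter(X[:, 0], X[:, 1], c=colors)
xlim = ax.get_xlim()
ylim = ax.get_ylim()

# STEP 6: meshgrid over the plot area to evaluate the decision function
xx, yy = np.meshgrid(np.linspace(xlim[0], xlim[1], 100),
                     np.linspace(ylim[0], ylim[1], 100))
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

# STEP 7: decision boundary (level 0) and margins (levels -1 and +1)
ax.contour(xx, yy, Z, colors='k', levels=[-1, 0, 1], alpha=0.5,
           linestyles=['--', '-', '--'])
plt.xlabel('Weight (g)')
plt.ylabel('Size (cm)')
plt.show()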
OUTPUT:
RESULT:
Thus, the Python program to build an SVM model was developed, and the output was successfully
verified.
Ex.No:8
IMPLEMENT ENSEMBLING TECHNIQUES
Date:
AIM:
To implement the ensembling technique of Blending with the given Alcohol QCM Dataset.
ALGORITHM:
STEP 1: Split the training dataset into train, test and validation dataset.
STEP 2: Fit all the base models using train dataset.
STEP 3: Make predictions on validation and test dataset.
STEP 4: These predictions are used as features to build a second-level model.
STEP 5: This model is used to make predictions on the test meta features, as in the sketch below.
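The listed program below implements voting; the following is a minimal sketch of the blending scheme these steps describe, where the dataset and column names follow the program below and the split sizes and random_state are assumptions:

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# STEP 1: split into training, validation and test sets
df = pd.read_csv("train_data.csv")
y = df["Weekday"]
X = df.drop("Weekday", axis=1)
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_hold, y_hold, test_size=0.5, random_state=0)

# STEP 2: fit the base models on the training set
base_models = [RandomForestClassifier(random_state=0),
               LogisticRegression(max_iter=1000)]
for m in base_models:
    m.fit(X_train, y_train)

# STEPS 3-4: base-model predictions on the validation and test sets
# become the meta features for a second-level model
meta_val = np.column_stack([m.predict_proba(X_val) for m in base_models])
meta_test = np.column_stack([m.predict_proba(X_test) for m in base_models])
blender = LogisticRegression(max_iter=1000)
blender.fit(meta_val, y_val)

# STEP 5: the second-level model predicts on the test meta features
print("Blended accuracy:", blender.score(meta_test, y_test))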
PROGRAM:
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss
# importing machine learning models for prediction
from sklearn.ensemble
import RandomForestClassifier
from xgboost import XGBClassifier
from sklearn.linear_model import LogisticRegression
# importing voting classifier
from sklearn.ensemble import VotingClassifier
# loading train data set in dataframe from train_data.csv file
df = pd.read_csv("train_data.csv")
# getting target data from the dataframe
target = df["Weekday"]
# getting train data from the dataframe
train = df.drop("Weekday", axis=1)
# splitting the train data into training and validation datasets
X_train, X_test, y_train, y_test = train_test_split(train, target, test_size=0.20)
# initializing all the model objects with default parameters
model_1 = LogisticRegression()
model_2 = XGBClassifier()
model_3 = RandomForestClassifier()
# Making the final model using voting classifier
final_model = VotingClassifier( estimators=[('lr', model_1), ('xgb', model_2), ('rf', model_3)],
voting='hard')
# training all the model on the train dataset
final_model.fit(X_train, y_train)
# predicting the output on the test dataset
pred_final = final_model.predict(X_test)
# printing log loss between actual and predicted value
print(log_loss(y_test, pred_final))
OUTPUT:
231
RESULT:
Thus the program to implement the ensembling technique of blending with the given Alcohol QCM Dataset was executed successfully and the output was verified.
Ex.No:9
IMPLEMENT CLUSTERING ALGORITHMS
Date:
AIM:
To implement the k-Nearest Neighbour algorithm to classify the Iris Dataset.
ALGORITHM:
STEP 1: Load the Iris dataset.
STEP 2: Split the dataset into training and testing sets using train_test_split.
STEP 3: Create a KNeighborsClassifier object with a chosen value of k.
STEP 4: Fit the classifier to the training data.
STEP 5: Predict the classes of the test data.
STEP 6: Print the accuracy of the classifier on the test data.
PROGRAM:
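The program listing is not reproduced in this section; the following is a minimal sketch following the steps above, where k = 3, the 80/20 split, and random_state=42 are assumptions:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report

# Load the Iris dataset and hold out 20% (30 samples) for testing
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=42)

# Fit a k-Nearest Neighbour classifier and report its accuracy
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print("accuracy is")
print(classification_report(y_test, y_pred))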
accuracy is
    accuracy                           0.97        30
RESULT:
Thus the program to implement the k-Nearest Neighbour algorithm on the Iris Dataset has been executed successfully and the output was verified.
Ex.No:10
IMPLEMENT EM OR BAYESIAN NETWORKS
Date:
AIM:
To implement the EM algorithm for Bayesian networks using the Iris dataset.
ALGORITHM:
STEP 1: Start with an EM Bayesian network using the Iris dataset.
STEP 2: Frame the Iris dataset with the Sepal_Length, Sepal_Width, Petal_Length, and Petal_Width columns.
STEP 3: E-step: Compute the expected sufficient statistics for the latent variables. This involves
computing the posterior probability distribution over the latent variables given the observed data and
the current parameter estimates. This can be done using the forward-backward algorithm or the belief
propagation algorithm.
Compute q(h) = P(H = h | E = e; θ) for each h (probabilistic inference).
STEP 4: M-step: Update the parameter estimates using the expected sufficient statistics computed in
step 3. This involves maximizing the likelihood of the data with respect to the parameters of the
network, given the expected sufficient statistics.
STEP 5: Repeat steps 3-4 until convergence. Convergence can be measured by monitoring the
change in the log-likelihood of the data, or by monitoring the change in the parameter estimates.
PROGRAM:
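The program listing is not reproduced in this section; the following is a minimal sketch using scikit-learn's GaussianMixture as a stand-in for the EM loop (the E- and M-steps of STEPS 3-4 run inside fit()); the library choice and the settings shown are assumptions:

import pandas as pd
from sklearn.datasets import load_iris
from sklearn.mixture import GaussianMixture

# STEP 2: frame the Iris data with the four measurement columns
iris = load_iris()
df = pd.DataFrame(iris.data, columns=['Sepal_Length', 'Sepal_Width',
                                      'Petal_Length', 'Petal_Width'])

# STEPS 3-5: fit() alternates E- and M-steps until the log-likelihood converges
gmm = GaussianMixture(n_components=3, max_iter=100, random_state=0)
gmm.fit(df)
labels = gmm.predict(df)
print(df.head(3))
print("Cluster labels:", labels[:10])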
OUTPUT:
id   Sepal_Length   Sepal_Width   Petal_Length   Petal_Width   Species
1    5.1            3.5           1.4            0.2           Iris-setosa
2    4.9            3.0           1.4            0.2           Iris-setosa
3    4.7            3.2           1.3            0.2           Iris-setosa
DIAGRAMMATIC PROJECTION
RESULT:
Thus the Python program to implement EM for Bayesian networks has been executed and verified successfully.
Ex.No:11
BUILD SIMPLE NN MODELS
Date:
AIM:
To write a Python program to build simple neural network models.
ALGORITHM:
STEP 1: Start
STEP 2: Choose the number of layers and neurons in each layer. This depends on the problem you are trying to solve.
STEP 3: Define the activation function for each layer. Common choices are ReLU, sigmoid, and tanh.
STEP 4: Initialize the weights and biases for each neuron in the network. This can be done randomly or using a pre-trained model.
STEP 5: Define the loss function and optimizer to be used during training. The loss function measures how well the model is doing, while the optimizer updates the weights and biases to minimize the loss.
STEP 6: Train the model on the input data using the defined loss function and optimizer. This involves forward propagation to compute the output of the model, and backpropagation to compute the gradients of the loss with respect to the weights and biases. The optimizer then updates the weights and biases based on the gradients.
STEP 7: Evaluate the performance of the model on new data using metrics such as accuracy, precision, recall, and F1 score.
STEP 8: Stop
PROGRAM:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
# Define the input and output data
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])
# Define the model
model = Sequential()
model.add(Dense(4, input_dim=2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# Compile the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model to the data
model.fit(X, y, epochs=2, batch_size=4)
# Evaluate the model on new data
test_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
predictions = model.predict(test_data)
print(predictions)
OUTPUT:
[[0.49927324]
[0.45714095]
[0.5460681]
[0.60924554]]
RESULT:
Thus the Python program to build simple NN Models was developed successfully.
Ex.No:12
BUILD DEEP LEARNING NN MODELS
Date:
AIM:
To implement and build a Convolutional neural network model which predicts the age and
gender of a person using the given pre-trained models.
ALGORITHM:
STEP 1: Import the necessary libraries, such as numpy and keras.
STEP 2: Load or generate your dataset. This can be done using numpy or any other data
manipulation library.
STEP 3: Preprocess your data by performing any necessary normalization,
scaling, or other transformations.
STEP 4: Define your neural network architecture using the Keras Sequential API. Add
layers to the model using the add() method, specifying the number of units, activation
function, and input dimensions for each layer.
STEP 5: Compile your model using the compile() method. Specify the loss function,
optimizer, and any evaluation metrics you want to use.
STEP 6: Train your model using the fit() method. Specify the training data, validation data, batch size, and number of epochs.
STEP 7: Evaluate your model using the evaluate() method. This will give you the loss and
accuracy metrics on the test set.
STEP 8: Use your trained model to make predictions on new data using the predict() method.
PROGRAM:
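The program listing and the pre-trained age/gender models are not reproduced in this section; the following is a minimal CNN sketch following the steps above, where the random stand-in images, input size, and single binary gender head are assumptions:

import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Stand-in data: 100 random 64x64 RGB "face" images with binary gender labels
X = np.random.rand(100, 64, 64, 3)
y = np.random.randint(0, 2, size=(100, 1))

# STEP 4: a small convolutional network
model = Sequential()
model.add(Conv2D(16, (3, 3), activation='relu', input_shape=(64, 64, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(1, activation='sigmoid'))  # gender head

# STEPS 5-7: compile, train, and evaluate
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=2, batch_size=16, validation_split=0.2)
loss, acc = model.evaluate(X, y)

# STEP 8: predictions on new data
predictions = model.predict(X[:4])
print(predictions)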
OUTPUT:
RESULT:
Thus the program to implement and build a convolutional neural network model which predicts the age and gender of a person using the given pre-trained models was executed successfully and the output was verified.