
VIVEKANANDHA

COLLEGE OF ARTS AND SCIENCES FOR WOMEN


(AUTONOMOUS)
(An ISO 9001:2008 Certified Institution;
Affiliated to Periyar University, Salem, Approved by AICTE,
Re-accredited With “A+” Grade by NAAC, Recognized u/s 12(B),2(f) of UGC Act,1956)
Elayampalayam, Tiruchengode-637 205.

BACHELOR OF SCIENCE IN CYBER SECURITY


PRACTICAL RECORD

NAME :

REG. NO :

SECOND YEAR (SEMESTER – III) 2024 - 2025

ARTIFICIAL INTELLIGENCE WITH MACHINE LEARNING LAB
(24U3CYCP03)
VIVEKANANDHA
COLLEGE OF ARTS AND SCIENCES FOR
WOMEN (AUTONOMOUS)
An ISO 9001:2008 Certified Institution
(Affiliated to Periyar University-Salem, Approved by AICTE,
Re-accredited With “A+” Grade by NAAC, Recognized u/s 12(B), 2(f) of UGC Act, 1956)
Elayampalayam, Tiruchengode-637205.

BACHELOR OF SCIENCE IN CYBER SECURITY

Certified that this is a bonafide record of practical work done by

Ms/Mrs Reg.No: in the

ARTIFICIAL INTELLIGENCE WITH MACHINE LEARNING LAB (24U3CYCP03)

at the Vivekanandha College of Arts and Sciences for Women (AUTONOMOUS),

Elayampalayam, Tiruchengode.

Staff In-Charge Head of the Department

Submitted for the University Practical Examinations held on at PG

and Research Department of Computer Science and Applications, Vivekanandha College of

Arts and Sciences for Women (AUTONOMOUS) Elayampalayam, Tiruchengode.

Internal Examiner External Examiner


INDEX

S.NO   DATE   CONTENTS                                 PAGE NO.   SIGN

1          Familiarizing Anaconda and Jupyter

2          K-Means Clustering

3          Decision Trees

4          K-Nearest Neighbour

5          EM and K-Means Clustering

6          Bayesian Network

7          Naive Bayesian Model

8          Simple Linear Regression

9          Artificial Neural Network

10         Weka tool for SVM Classification


Date :
Exercise No :

1. Familiarizing Anaconda and Jupyter and Importing Modules and Dependencies for Machine Learning.

AIM:

To familiarize Anaconda and Jupyter and to import modules and dependencies for machine learning.

ALGORITHM:

1. Install Anaconda

2. Set Up a Virtual Environment

3. Install Jupyter Notebook

4. Install Machine Learning Libraries

5. Launch Jupyter Notebook

6. Import Necessary Libraries in Jupyter Notebook


STEPS:

Step 1: Install Anaconda

1. Download Anaconda
o Go to the Anaconda website.
o Choose the appropriate installer for your operating system
(Windows, macOS, or Linux).
o Click the "Download" button.

2. Follow Installation Instructions

o Once the installer is downloaded, open it.


o Follow the on-screen instructions to install Anaconda.
 For Windows: Run the .exe installer and follow the setup steps.
 For macOS: Open the downloaded .pkg file and follow the
setup steps.
 For Linux: Run the installation script in the terminal.

Step 2: Set Up a Virtual Environment

1. Open Anaconda Navigator


o After installation, open Anaconda Navigator from your applications
menu.
2. Create a New Environment

o In Anaconda Navigator, navigate to the “Environments” tab on the


left-hand side.
o Click on the “Create” button to open the environment creation
dialog.
o Name your environment (e.g., ml_env).
o Select the Python version you want to use (usually the latest version
is recommended).
o Click the “Create” button to set up the environment.
Step 3: Install Jupyter Notebook

1. Activate Your Environment


o Open the Anaconda Prompt (Windows) or Terminal (macOS/Linux).
o Type the following command to activate your environment:

conda activate ml_env

2. Install Jupyter Notebook


o With your environment activated, install Jupyter Notebook by typing:

conda install jupyter

Step 4: Install Machine Learning Libraries


1. Install Common Libraries
o In the activated environment, install the necessary libraries for
machine learning:

conda install numpy pandas scikit-learn matplotlib

Step 5: Launch Jupyter Notebook

1. Start Jupyter Notebook


o With your environment still activated, start Jupyter Notebook by
typing:

jupyter notebook
o This command will open the Jupyter Notebook interface in your
default web browser.
2. Create a New Notebook

o In the Jupyter Notebook interface, click the “New” button on the


right and select “Python 3”.
o This will open a new notebook where you can write and execute code.
Step 6: Importing Modules in Jupyter Notebook
1. Import the Necessary Modules
o In your new Jupyter Notebook, import the necessary libraries by
writing the following code in a cell and running it:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
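
To confirm that the environment and imports are working, you can also print each library's version (a quick sanity check; the exact version numbers depend on your installation):

import sklearn
import matplotlib
print("NumPy:", np.__version__)
print("Pandas:", pd.__version__)
print("Matplotlib:", matplotlib.__version__)
print("scikit-learn:", sklearn.__version__)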

RESULT:

Thus the steps to familiarize Anaconda and Jupyter and to import modules and dependencies for machine learning have been successfully executed.
Date :
Exercise No :

2. Given the following data, which specify classifications for nine combinations of VAR1 and VAR2, predict a classification for a case where VAR1=0.906 and VAR2=0.606, using the result of k-means clustering with 3 means (i.e., 3 centroids).

VAR1 VAR2 CLASS


1.713 1.586 0
0.180 1.786 1
0.353 1.240 1
0.940 0.566 0
0.486 0.759 1
1.266 1.106 0
1.540 0.419 1
0.459 1.799 1
0.773 0.186 1

AIM:

To predict a classification using the K-Means clustering algorithm.

ALGORITHM:
1. Import Required Libraries:

 Import necessary libraries like numpy and sklearn.

2. Define the Dataset:

 Create the feature matrix X.


 Optionally, create the label array y (though it's not used for fitting in
KMeans).
3. Fit the KMeans Model:

 Initialize the KMeans model with the desired number of clusters.


 Fit the model using the feature matrix X.

4. Display the Input Data:

 Print the input data for reference.

5. Get Test Data from User:

 Prompt the user to input new data points for prediction.

6. Predict the Cluster for Test Data:

 Use the fitted model to predict the cluster for the new data point.
 Display the predicted cluster.

7.End the Program


CODING:

from sklearn.cluster import KMeans


import numpy as np
X = np.array([[1.713,1.586], [0.180,1.786], [0.353,1.240],
[0.940,1.566], [1.486,0.759], [1.266,1.106],[1.540,0.419],[0.459,1.799],
[0.773,0.186]])
y=np.array([0,1,1,0,1,0,1,1,1])
kmeans = KMeans(n_clusters=3, random_state=0).fit(X,y)
print("The input data is ")
print("VAR1 \t VAR2 \t CLASS")
i = 0
for val in X:
    print(val[0], "\t", val[1], "\t", y[i])
    i += 1
print("="*20)
# To get test data from the user
print("The Test data to predict ")
test_data = []
VAR1 = float(input("Enter Value for VAR1 :"))
VAR2 = float(input("Enter Value for VAR2 :"))
test_data.append(VAR1)
test_data.append(VAR2)
print("="*20)
# Predict the cluster for the test point (this produces the final output line)
predicted_class = kmeans.predict([test_data])
print("The predicted Class is :", predicted_class)
OUTPUT:

The input data is


VAR1 VAR2 CLASS
1.713 1.586 0
0.18 1.786 1
0.353 1.24 1
0.94 1.566 0
1.486 0.759 1
1.266 1.106 0
1.54 0.419 1
0.459 1.799 1
0.773 0.186 1
====================
The Test data to predict
Enter Value for VAR1 :0.906
Enter Value for VAR2 :0.606
====================
The predicted Class is : [0]

RESULT:

Thus the program to predict a classification using the K-Means clustering algorithm has been successfully executed.
Date :
Exercise No :
3. Write a program to demonstrate the working of the decision tree
based ID3 algorithm. Use an appropriate data set for building the
decision tree and apply this knowledge to classify a new sample.

AIM:
To write a program to demonstrate the working of the decision tree based ID3 algorithm and apply this knowledge to classify a new sample.

ALGORITHM:

1. Load the Data:

 Use the Iris dataset, which is commonly used for classification tasks.

2. Prepare the Data:

 Convert the dataset into a Pandas DataFrame for easier manipulation.


 Split the data into training and testing sets.

3. Train the Model:

 Use DecisionTreeClassifier with criterion='entropy' to build the decision tree (this uses information gain, as in ID3; a short entropy sketch follows this list).

4. Print the Tree Rules:

 Display the rules of the decision tree using export_text.

5. Plot the Tree:

 Visualize the decision tree using plot_tree.

6. Classify New Samples:

 Predict the class of a new sample and display the result.

7. Evaluate the Model:

 Calculate and display the accuracy of the model on the test set.
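
For intuition on the criterion='entropy' setting mentioned in step 3, a minimal sketch of the entropy measure that ID3 uses to score splits (an illustrative helper, not part of the main program):

import numpy as np

def entropy(labels):
    # H = -sum(p * log2(p)) over the class proportions p
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

print(entropy([0, 0, 1, 1]))   # 1.0 (maximally mixed node)
print(entropy([0, 0, 0, 0]))   # ~0  (pure node)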
CODING:

import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text, plot_tree
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split

# Sample dataset: Iris dataset (common for classification tasks)


from sklearn.datasets import load_iris
data = load_iris()
X = data.data
y = data.target
feature_names = data.feature_names
target_names = data.target_names

# Create a DataFrame
df = pd.DataFrame(data.data, columns=feature_names)
df['species'] = pd.Categorical.from_codes(data.target, target_names)

# Split the dataset into training and testing sets


X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
random_state=42)

# Create and train the decision tree classifier


clf = DecisionTreeClassifier(criterion='entropy', random_state=42)
clf.fit(X_train, y_train)

# Print the decision tree rules
tree_rules = export_text(clf, feature_names=feature_names)
print(tree_rules)

# Plot the decision tree
plt.figure(figsize=(20, 10))
plot_tree(clf, feature_names=feature_names, class_names=target_names,
          filled=True)
plt.title('Decision Tree')
plt.show()
# Classify a new sample
new_sample = np.array([[5.1, 3.5, 1.4, 0.2]]) # Example: A new iris sample
predicted_class = clf.predict(new_sample)
predicted_class_name = target_names[predicted_class[0]]
print(f"\nNew Sample Prediction: {predicted_class_name}")

# Test accuracy
accuracy = clf.score(X_test, y_test)
print(f"\nModel Accuracy: {accuracy:.2f}")
OUTPUT:
New Sample Prediction: setosa
Model Accuracy: 1.00

RESULT:

Thus the program to demonstrate the working of the decision tree based ID3 algorithm and apply this knowledge to classify a new sample has been executed successfully.
Date :
Exercise No :

4. Write a program to implement the K-Nearest Neighbour Algorithm to classify the iris data set. Print both correct and wrong predictions.

AIM:

To write a program to implement the K-Nearest Neighbour Algorithm to classify the iris dataset and print both correct and wrong predictions.

ALGORITHM:

1. Import Required Libraries:


o Import necessary libraries like sklearn and numpy.
2. Load the Iris Dataset:

o Load the Iris dataset using load_iris from sklearn.datasets.


3. Split the Dataset:

o Split the dataset into training and testing sets using train_test_split.
4. Create the k-NN Classifier:

o Initialize the k-NN classifier with a specified number of neighbors (k=3 in


this case).
5. Train the Classifier:

o Train the classifier using the training data.


6. Make Predictions on Test Data:

o Use the trained classifier to make predictions on the test data.


7. Evaluate Predictions:

o Calculate the accuracy of the predictions.


o Count the number of correct and wrong predictions.
8. Display Results:

o Print the number of correct and wrong predictions.


o Print the accuracy of the classifier.
CODING:

from sklearn.datasets import load_iris


from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.neighbors import KNeighborsClassifier

# Load the Iris dataset


iris = load_iris()
X = iris.data
y = iris.target

# Split the dataset into training and testing sets


X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
random_state=42)

# Create a k-NN classifier with k=3


knn = KNeighborsClassifier(n_neighbors=3)

# Train the classifier


knn.fit(X_train, y_train)

# Make predictions on the test data


predictions = knn.predict(X_test)

# Evaluate predictions
accuracy = accuracy_score(y_test, predictions)
correct = sum(predictions == y_test)
wrong = len(y_test) - correct

print(f'Correct predictions: {correct}/{len(y_test)}')


print(f'Wrong predictions: {wrong}/{len(y_test)}')
print(f'Accuracy: {accuracy:.2f}')
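
The counts above summarize the results; to literally print each correct and wrong prediction as the task statement asks, a short extension of the same program (it reuses predictions, y_test and iris from the code above):

# List each test sample's predicted vs actual class
for i, (p, a) in enumerate(zip(predictions, y_test)):
    status = "Correct" if p == a else "Wrong"
    print(f"Sample {i}: predicted={iris.target_names[p]}, "
          f"actual={iris.target_names[a]} -> {status}")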
OUTPUT:
Correct predictions: 30/30
Wrong predictions: 0/30
Accuracy: 1.00

RESULT:

Thus the program to demonstrate the K-Nearest Neighbour algorithm to classify the iris data set has been successfully executed.
Date :
Exercise No :

5. Apply EM algorithm to cluster a set of data stored in a .CSV file. Use


the same data set for clustering using k-Means algorithm. Compare the
results of these two algorithms and comment on the quality of
clustering. You can add Java/Python ML library classes/API in the
program.

AIM:

To write a program to apply the EM algorithm and the K-Means algorithm to cluster a set of data stored in a .CSV file and compare the results of the two algorithms.

ALGORITHM:

1. Import Required Libraries:

 Import necessary libraries such as pandas, numpy,


StandardScaler, GaussianMixture, KMeans, and
silhouette_score.

2. Load and Preprocess the Dataset:

 Load the dataset from a CSV file.


 Drop any rows with missing values.
 Extract the features from the dataset.

3. Standardize the Features:

 Use StandardScaler to standardize the feature values.

4. Fit the Gaussian Mixture Model (GMM):

 Initialize the Gaussian Mixture Model with a specified number of


components.
 Fit the GMM to the standardized features.
 Predict the labels using the fitted GMM.
 Calculate the silhouette score for the GMM clustering results.
5. Fit the K-Means Model:

 Initialize the K-Means model with a specified number of clusters.


 Fit the K-Means model to the standardized features.
 Predict the labels using the fitted K-Means model.
 Calculate the silhouette score for the K-Means clustering results.

6. Compare Clustering Results:

 Print the clustering labels from both the GMM and K-Means models.
 Compare the silhouette scores to determine which algorithm provides
better clustering results.
CODING:

import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

data = pd.read_csv('your_dataset.csv')
data.dropna(inplace=True)

#features
X = data.iloc[:, :-1].values

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

gmm = GaussianMixture(n_components=3, random_state=42)

gmm.fit(X_scaled)

gmm_labels = gmm.predict(X_scaled)

silhouette_gmm = silhouette_score(X_scaled, gmm_labels)


print(f"Silhouette Score for EM clustering: {silhouette_gmm:.2f}")

kmeans = KMeans(n_clusters=3, random_state=42)

kmeans.fit(X_scaled)
kmeans_labels = kmeans.labels_

silhouette_kmeans = silhouette_score(X_scaled, kmeans_labels)


print(f"Silhouette Score for K-Means clustering: {silhouette_kmeans:.2f}")
print("\nComparison of Clustering Results:")
print("---------------------------------")
print("EM Algorithm Clustering Labels:")
print(gmm_labels)
print("K-Means Clustering Labels:")
print(kmeans_labels)

# Assess clustering quality based on silhouette scores


if silhouette_gmm > silhouette_kmeans:
    print("\nEM Algorithm provides better clustering results.")
elif silhouette_kmeans > silhouette_gmm:
    print("\nK-Means Algorithm provides better clustering results.")
else:
    print("\nBoth algorithms provide similar clustering results.")
OUTPUT:
Silhouette Score for EM clustering: 0.26
Silhouette Score for K-Means clustering: 0.26

Comparison of Clustering Results:


---------------------------------
EM Algorithm Clustering Labels:
[2 1 1 1 2 0 2 2 1 1 2 2 1 1 2]
K-Means Clustering Labels:
[2 1 1 1 2 0 2 2 1 1 2 2 1 1 2]

Both algorithms provide similar clustering results.

RESULT:
Thus the program to apply the EM algorithm and the K-Means algorithm to cluster a set of data stored in a .CSV file and compare the results of the two algorithms has been successfully executed.
Date :
Exercise No :
6. Write a program to construct a Bayesian network considering medical data. Use this model to demonstrate the diagnosis of heart patients using the standard Heart Disease Data Set. You can use Java/Python ML library classes/API.

AIM:
To write a program to construct a Bayesian network considering medical data to diagnose heart disease.
ALGORITHM:

1. Import Libraries:

 pandas for data manipulation.


 numpy for numerical operations.
 warnings to suppress warnings.
 LabelEncoder for encoding categorical variables.
 train_test_split for splitting the dataset into training and testing sets.
 GaussianNB for the Naive Bayes classifier.
 accuracy_score for evaluating the model.

2. Load and Preprocess the Data:

 Read the dataset (hearts.csv).


 Encode categorical variables using LabelEncoder.

3. Define Features and Target:

 x contains the input features.


 y contains the target variable (HeartDisease).

4. Split the Data:

 Split the dataset into training (80%) and testing (20%) sets.
5. Train the Naive Bayes Model:

 Initialize and train the GaussianNB model.

6. Make Predictions and Evaluate the Model:

 Predict the target variable on the test set.


 Calculate and print the accuracy of the model.

7. Predict on New Data:

 Make a prediction for a new sample and print the result.


CODING:

import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings('ignore')

df = pd.read_csv('hearts.csv') #to read the file

from sklearn.preprocessing import LabelEncoder


le = LabelEncoder()

df['Sex'] = le.fit_transform(df['Sex'])
df['ChestPainType'] = le.fit_transform(df['ChestPainType'])
df['RestingECG'] = le.fit_transform(df['RestingECG'])
df['ExerciseAngina'] = le.fit_transform(df['ExerciseAngina'])
df['ST_Slope'] = le.fit_transform(df['ST_Slope'])

x=df.drop(columns=['HeartDisease']) #Input

y=df['HeartDisease'] #Output

from sklearn.model_selection import train_test_split


x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=12)  # split the data

#X_train - 80% input data


#Y_train - 80% output data
#X_test - 20% input data
#Y_test - 20% output data

from sklearn.naive_bayes import GaussianNB


NB = GaussianNB()
NB.fit(x_train, y_train)

y_pred=NB.predict(x_test)

from sklearn.metrics import accuracy_score


print('ACCURACY is', accuracy_score(y_test,y_pred))

testPrediction = NB.predict([[29,0,2,100,106,1,2,80,1,1,1]])
if testPrediction[0] == 1:
    print("The Patient Have Heart Disease,please consult the Doctor")
else:
    print("The Patient Normal")
OUTPUT:
ACCURACY is 0.8206521739130435
The Patient Have Heart Disease,please consult the Doctor

RESULT:

Thus the program to construct a Bayesian network considering medical data to diagnose heart disease has been successfully executed.
Date :
Exercise No :
7. Assuming a set of documents that need to be classified, use the
naive Bayesian Classifier model to perform this task. Built-in Java
classes/API can be used to write the program. Calculate the accuracy,
precision, and recall for your data set.
AIM:
To write a program using Naive Bayesian algorithm to calculate
accuracy, precision and recall for our dataset.

ALGORITHM:
1. Data Preparation:

 Create a sample dataset with documents and their corresponding categories.
 Use CountVectorizer to convert the text documents into feature vectors (bag-of-words representation; a small illustration follows this list).

2. Train-Test Split:

 Split the dataset into training and testing sets using train_test_split.

3. Train the Model:

 Train a Multinomial Naive Bayes classifier using the training data.

4. Make Predictions and Evaluate:

 Make predictions on the test set.


 Calculate accuracy, precision, and recall to evaluate the model's
performance.
5. Predict for a New Sample:

 Transform a new sample document using the same CountVectorizer.


 Predict the category of the new sample and print the result.
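
For a concrete view of the bag-of-words representation referred to in step 1, a small standalone illustration (the two example sentences are made up):

from sklearn.feature_extraction.text import CountVectorizer

docs = ['the quick brown fox', 'the lazy dog']
vec = CountVectorizer()
counts = vec.fit_transform(docs)
print(vec.get_feature_names_out())  # learned vocabulary
print(counts.toarray())             # per-document word counts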
CODING:

import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, precision_score, recall_score

data = {
'document': [
'The quick brown fox jumps over the lazy dog',
'Never jump over the lazy dog quickly',
'Brown foxes are quick and jumping over dogs',
'A quick brown dog outpaces a quick fox',
'The lazy dog does not jump over the quick fox',
'Quick brown fox jumps over a lazy dog',
'A lazy dog does not jump quickly'
],
'category': [
'positive', 'negative', 'positive', 'positive', 'negative', 'positive',
'negative'
]
}
df = pd.DataFrame(data)
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(df['document'])
y = df['category']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

nb = MultinomialNB()
nb.fit(X_train, y_train)

y_pred = nb.predict(X_test)

accuracy = accuracy_score(y_test, y_pred)


precision = precision_score(y_test, y_pred, pos_label='positive')
recall = recall_score(y_test, y_pred, pos_label='positive')

print(f'Accuracy: {accuracy:.2f}')
print(f'Precision: {precision:.2f}')
print(f'Recall: {recall:.2f}')
test_document = 'Quick brown fox'
test_vector = vectorizer.transform([test_document])
test_prediction = nb.predict(test_vector)
if test_prediction[0] == 'positive':
    print("The document is classified as positive.")
else:
    print("The document is classified as negative.")
OUTPUT:
Accuracy: 1.00
Precision: 1.00
Recall: 1.00
The document is classified as positive.

RESULT:

Thus the program using the Naive Bayesian algorithm to calculate accuracy, precision and recall for our dataset has been successfully executed.
Date :
Exercise No :
8. Write a Python program to implement simple linear regression and plot a graph.

AIM:
To write a Python program to implement simple linear regression and plot a graph.

ALGORITHM:

1. Data Preparation:

 Create a sample dataset with two columns: X and Y.

2. Splitting the Data:

 Separate the dataset into features (X) and target variable (y).
 Split the dataset into training and testing sets using train_test_split.

3. Training the Model:

 Create an instance of LinearRegression.


 Train the model using the training data.

4. Making Predictions:

 Predict the target values for the test set.

5. Plotting:

 Use matplotlib to plot the data points.


 Plot the regression line using the trained model.

6. Output:

 Display the coefficient and intercept of the regression line.


 Print the predicted and actual values for the test set
CODING:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

data = {
'X': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
'Y': [3, 4, 2, 5, 6, 7, 8, 9, 10, 12]
}

df = pd.DataFrame(data)

X = df[['X']].values
y = df['Y'].values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LinearRegression()
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
plt.scatter(X, y, color='blue', label='Data Points')

plt.plot(X, model.predict(X), color='red', linewidth=2, label='Regression Line')

plt.xlabel('X')
plt.ylabel('Y')
plt.title('Simple Linear Regression')
plt.legend()
plt.show()

print(f"Coefficient: {model.coef_[0]}")
print(f"Intercept: {model.intercept_}")
for i in range(len(X_test)):
    print(f"Predicted: {y_pred[i]}, Actual: {y_test[i]}")
OUTPUT:
Coefficient: 1.1272727272727274
Intercept: 1.490909090909091
Predicted: 5.0, Actual: 5
Predicted: 11.0, Actual: 12

RESULT:
Thus the Python program to implement simple linear regression and plot a graph has been successfully executed.
Date :
Exercise No :
9. Build an Artificial Neural Network by implementing the Back
propagation algorithm and test the same using appropriate data sets.

AIM:
To Build an Artificial Neural Network by implementing the Back propagation
algorithm and test the same using appropriate data sets.

ALGORITHM:
1. Data Preprocessing:
o Load and preprocess the Iris dataset. Standardize the features and
one-hot encode the labels.
2. Initialize Parameters:
o Define the architecture of the neural network: input layer, hidden
layer, and output layer.
o Initialize weights and biases randomly.
3. Training:
o Perform forward propagation to calculate outputs.
o Compute error and perform backpropagation to update weights
and biases.
o Print the loss every 100 epochs.
4. Testing:
o Use the trained network to make predictions on the test data and
calculate accuracy.
CODING:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.metrics import accuracy_score

# Sigmoid activation function and its derivative


def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    return sigmoid(x) * (1 - sigmoid(x))

# Load dataset and preprocess


data = load_iris()
X = data.data
y = data.target
encoder = OneHotEncoder(sparse_output=False)
y_onehot = encoder.fit_transform(y.reshape(-1, 1))
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y_onehot,
test_size=0.2, random_state=42)

# Initialize parameters
input_size = X_train.shape[1]
hidden_size = 4
output_size = y_train.shape[1]
learning_rate = 0.01
epochs = 500

# Randomly initialize weights and biases


np.random.seed(42)
weights_input_hidden = np.random.rand(input_size, hidden_size)
weights_hidden_output = np.random.rand(hidden_size, output_size)
bias_hidden = np.random.rand(hidden_size)
bias_output = np.random.rand(output_size)

# Training
for epoch in range(epochs):
    # Forward pass
    hidden_layer = sigmoid(np.dot(X_train, weights_input_hidden) + bias_hidden)
    output_layer = sigmoid(np.dot(hidden_layer, weights_hidden_output) + bias_output)

    # Compute error and backpropagate
    error = y_train - output_layer
    d_output = error * sigmoid_derivative(output_layer)
    d_hidden = d_output.dot(weights_hidden_output.T) * sigmoid_derivative(hidden_layer)

    # Update weights and biases
    weights_hidden_output += hidden_layer.T.dot(d_output) * learning_rate
    bias_output += np.sum(d_output, axis=0) * learning_rate
    weights_input_hidden += X_train.T.dot(d_hidden) * learning_rate
    bias_hidden += np.sum(d_hidden, axis=0) * learning_rate

    # Print loss every 100 epochs
    if epoch % 100 == 0:
        loss = np.mean(np.square(error))
        print(f'Epoch {epoch}, Loss: {loss:.4f}')

# Test the model
hidden_layer_test = sigmoid(np.dot(X_test, weights_input_hidden) + bias_hidden)
output_layer_test = sigmoid(np.dot(hidden_layer_test, weights_hidden_output) + bias_output)
predicted_classes = np.argmax(output_layer_test, axis=1)
true_classes = np.argmax(y_test, axis=1)
accuracy = accuracy_score(true_classes, predicted_classes)

print(f'Test Accuracy: {accuracy:.2f}')
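
To classify a single new measurement with the trained weights, the same forward pass can be reused (the flower values here are illustrative):

new_flower = scaler.transform([[5.1, 3.5, 1.4, 0.2]])
h = sigmoid(np.dot(new_flower, weights_input_hidden) + bias_hidden)
o = sigmoid(np.dot(h, weights_hidden_output) + bias_output)
print("Predicted species:", data.target_names[np.argmax(o)])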


OUTPUT:

Epoch 0, Loss: 0.2926


Epoch 100, Loss: 0.0416
Epoch 200, Loss: 0.0239
Epoch 300, Loss: 0.0168
Epoch 400, Loss: 0.0132

RESULT:

Thus the program to build an Artificial Neural Network by implementing the Backpropagation algorithm and test it using appropriate data sets has been executed successfully.
Date :
Exercise No :

10. Using the Weka tool for SVM classification for a chosen domain application.
AIM:
To use the Weka tool for SVM classification for a chosen domain application.

ALGORITHM:
1. Download and Install Weka:
o You can download Weka from the official Weka website.
o Follow the installation instructions specific to your operating system.
2. Prepare Your Dataset:

o Ensure your dataset is in ARFF (Attribute-Relation File Format) or CSV


format.
o For this example, we'll use the Iris dataset.
3. Open Weka:

o Launch Weka and select the "Explorer" option.


4. Load Your Dataset:

o Click on the "Open file..." button and load your dataset file (e.g.,
iris.arff or iris.csv).
5. Choose the SVM Classifier:

o Click on the "Classify" tab.


o In the "Classifier" section, click on the "Choose" button.
o Navigate to functions and select SMO (which stands for Sequential
Minimal Optimization, an algorithm for training SVM).
6. Configure SVM Parameters:

o You can configure the SVM parameters by clicking on the text next to
the "Choose" button.
o This will open a dialog where you can set parameters such as the
kernel type, complexity parameter (C), etc.
o For example, you can set the kernel type to PolyKernel for a polynomial kernel or choose RBFKernel for a radial basis function kernel (the default for SMO is PolyKernel).
7. Run the Classification:
o In the "Test options" section, choose how you want to evaluate your
model. You can use cross-validation (default is 10-fold) or a separate
test set.
o Click on the "Start" button to train and evaluate the SVM model.
8. View the Results:

o Once the process completes, the results will be displayed in the


"Classifier output" pane.
o You can see metrics such as accuracy, precision, recall, and a
confusion matrix.
STEPS:

1. Load the Iris Dataset:

 Download the Iris dataset in ARFF format (a copy of iris.arff also ships in Weka's data folder).


 Open Weka and load the iris.arff file.

2. Choose the SVM Classifier (SMO):

 Go to the "Classify" tab.


 Click "Choose" and select functions > SMO.

3. Configure SVM Parameters:

 Click on the text next to "Choose" (it will look like SMO -C 1.0 -L 0.001 -P
1.0E-12 -N 0 -V -1 -W 1 -K
"weka.classifiers.functions.supportVector.PolyKernel -E 1.0").
 Set any desired parameters. For a simple linear kernel, use PolyKernel
with an exponent of 1.

4. Run the Classification:

 Use the default 10-fold cross-validation.


 Click "Start" to begin the classification.

5. View the Results:

 Check the "Classifier output" pane for results.


 You will see detailed output, including the accuracy, precision, recall,
and confusion matrix.
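
The same evaluation can also be run from the command line instead of the Explorer GUI (a sketch; the paths to weka.jar and iris.arff are assumptions about your installation):

java -cp weka.jar weka.classifiers.functions.SMO -t iris.arff -x 10

Here -t names the training file and -x sets the number of cross-validation folds.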
OUTPUT:

=== Classifier model (full training set) ===

SMO

...

Correctly Classified Instances 96 96 %

Incorrectly Classified Instances 4 4 %

Kappa statistic 0.94

Mean absolute error 0.0386

Root mean squared error 0.1644

Relative absolute error 12.25 %

Root relative squared error 35.88 %

Total Number of Instances 100

=== Confusion Matrix ===

a b c <-- classified as

50 0 0 | a = Iris-setosa

0 46 4 | b = Iris-versicolor

0 1 49 | c = Iris-virginica
RESULT:
Thus using the Weka tool for SVM classification for a chosen domain application has been executed successfully.
