ML Experiment - 9 - Final
Step-by-Step Execution:
Step 1: Import Required Libraries
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor
from sklearn.metrics import accuracy_score, mean_squared_error, r2_score
from sklearn.datasets import load_iris, fetch_california_housing
• Required for data handling, model building, and evaluation.
Step 2 (Classification): Split Data
train_test_split(...test_size=0.2)
Step 3 (Classification): Train Classifier
knn_classifier = KNeighborsClassifier(n_neighbors=3)
knn_classifier.fit(...)
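The elided calls in the steps above can be filled in as a minimal, runnable sketch. The `test_size=0.2` and `n_neighbors=3` values follow the steps; the `random_state=42` seed is an assumption added only for reproducibility:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Load the Iris dataset and hold out 20% of the samples for testing
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)  # seed assumed, for reproducibility

# Fit a 3-nearest-neighbour classifier and score it on the held-out split
knn_classifier = KNeighborsClassifier(n_neighbors=3)
knn_classifier.fit(X_train, y_train)
accuracy = accuracy_score(y_test, knn_classifier.predict(X_test))
```

Because Iris has 150 samples, a 20% hold-out yields 30 test labels, matching the length of the label arrays shown in the Output section.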
Step 4 (Regression): Split Data
train_test_split(...test_size=0.2)
Step 5 (Regression): Train Regressor
knn_regressor = KNeighborsRegressor(n_neighbors=5)
knn_regressor.fit(...)
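The regression steps follow the same split-train-evaluate pattern. A minimal sketch is shown below on a synthetic dataset from `make_regression` (an assumption, so the sketch runs without downloading California Housing); `test_size=0.2` and `n_neighbors=5` follow the steps above:

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error, r2_score

# Synthetic stand-in for the housing data: 500 samples, 8 features
X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Fit a 5-nearest-neighbour regressor and evaluate with MSE and R^2
knn_regressor = KNeighborsRegressor(n_neighbors=5)
knn_regressor.fit(X_train, y_train)
y_pred = knn_regressor.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
r2 = r2_score(y_test, y_pred)
```

Replacing `make_regression(...)` with `fetch_california_housing(return_X_y=True)` reproduces the experiment's actual setup.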
Computational Cost: High (for large datasets), for both the classifier and the regressor.
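The high cost arises because KNN is a lazy learner: `fit` essentially stores the training data, and the expensive neighbour search happens at prediction time. One common mitigation, sketched below with scikit-learn's `algorithm` parameter, is to build a tree index instead of brute-force distance computation:

```python
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Brute force computes all pairwise distances at query time;
# a KD-tree index narrows the neighbour search for low-dimensional data
knn_brute = KNeighborsClassifier(n_neighbors=3, algorithm="brute").fit(X, y)
knn_tree = KNeighborsClassifier(n_neighbors=3, algorithm="kd_tree").fit(X, y)

# The two strategies search differently but answer the same question,
# so accuracy is unaffected; only query-time cost changes
brute_acc = knn_brute.score(X, y)
tree_acc = knn_tree.score(X, y)
```

Tree indexes help most when the feature dimension is small; in high dimensions, brute force can again become competitive.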
Source Code:
# Import required libraries
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor
from sklearn.metrics import accuracy_score, mean_squared_error, r2_score
from sklearn.datasets import load_iris, fetch_california_housing
# ---------------------------------------------
# KNN Classification: load the Iris dataset
iris = load_iris()
X_cls = iris.data
y_cls = iris.target
# Split (80/20), train (k = 3), and evaluate
X_train, X_test, y_train, y_test = train_test_split(X_cls, y_cls, test_size=0.2, random_state=42)
knn_classifier = KNeighborsClassifier(n_neighbors=3)
knn_classifier.fit(X_train, y_train)
y_pred = knn_classifier.predict(X_test)
print("===== KNN Classification: Iris Dataset =====")
print("Predicted Labels:", y_pred)
print("Actual Labels :", y_test)
print("Classification Accuracy: %.2f%%" % (accuracy_score(y_test, y_pred) * 100))
# ---------------------------------------------
# KNN Regression: load the California Housing dataset
housing = fetch_california_housing()
X_reg = housing.data
y_reg = housing.target
# Split (80/20), train (k = 5), and evaluate
Xr_train, Xr_test, yr_train, yr_test = train_test_split(X_reg, y_reg, test_size=0.2, random_state=42)
knn_regressor = KNeighborsRegressor(n_neighbors=5)
knn_regressor.fit(Xr_train, yr_train)
yr_pred = knn_regressor.predict(Xr_test)
print("Mean Squared Error:", mean_squared_error(yr_test, yr_pred))
print("R^2 Score:", r2_score(yr_test, yr_pred))
Output:
===== KNN Classification: Iris Dataset =====
Predicted Labels: [1 0 2 1 1 0 1 2 1 1 2 0 0 0 0 1 2 1 1 2 0 2 0 2 2 2 2 2 0 0]
Actual Labels : [1 0 2 1 1 0 1 2 1 1 2 0 0 0 0 1 2 1 1 2 0 2 0 2 2 2 2 2 0 0]
Classification Accuracy: 100.00%
These results demonstrate that KNN is a versatile and easy-to-use machine learning algorithm suitable for both classification and regression tasks.