Record

Exp 1 : Regression Model
Aim : To write an R program for a regression model. a) Import the data from a webpage.
Algorithm :
Step 1 : Start
Step 2 : Load the ggplot2 library
Step 3 : Read the dataset from the webpage URL into a data frame
Step 4 : Fit a linear regression model with lm()
Step 5 : Display the model summary
Step 6 : Plot the data points and the fitted regression line
Step 7 : Display the plot
Step 8 : Stop
Source code :
library(ggplot2)

# Placeholder URL and column names (x, y); substitute the actual dataset
url <- "https://example.com/data.csv"
df <- read.csv(url)

# Fit a simple linear regression model
model <- lm(y ~ x, data = df)
summary(model)

# Scatter plot with the fitted regression line
ggplot(df, aes(x = x, y = y)) +
  geom_point(color = 'blue') +
  geom_smooth(method = 'lm', color = 'red') +
  theme_minimal()
Output:
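The same fit can be cross-checked outside R; the minimal Python sketch below assumes the same placeholder URL and column names x and y, which stand in for the actual dataset.

import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical URL and column names; substitute the actual dataset
url = "https://example.com/data.csv"
df = pd.read_csv(url)

model = LinearRegression()
model.fit(df[["x"]], df["y"])

print("Intercept:", model.intercept_)
print("Slope:", model.coef_[0])
print("R^2:", model.score(df[["x"]], df["y"]))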
Exp 2 : Logistic Regression
Aim: To write an R program for a logistic regression model.
Algorithm:
Step 1: Start
Step 2: Load the required libraries (dplyr, ggplot2) and set a random seed.
Step 3: Generate data: Create random GRE, GPA, Rank, and Admission data.
Step 4: Fit model: Logistic regression (glm) with Admission as the target.
Step 5: Summarize the model and plot the predicted admission probabilities.
Step 6: Stop
Source code :
library(dplyr)
library(ggplot2)

set.seed(123)
n <- 200

# Simulate admissions data (the value ranges are assumptions)
data <- data.frame(
  GRE = round(runif(n, 260, 340)),
  GPA = round(runif(n, 2, 4), 2),
  Rank = sample(1:4, n, replace = TRUE),
  Admission = rbinom(n, 1, 0.4)
)

model <- glm(Admission ~ GRE + GPA + Rank, data = data, family = binomial)
summary(model)

# Plot predicted admission probability against GRE score
data$prob <- predict(model, type = "response")
ggplot(data, aes(x = GRE, y = prob, color = factor(Admission))) +
  geom_point() +
  labs(x = "GRE Score", y = "Predicted Probability of Admission") +
  theme_minimal()
Output:
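To read the fitted coefficients, the predicted probability comes from the logistic function p = 1 / (1 + e^(-z)). A minimal Python sketch, using made-up coefficient values in place of the actual summary(model) output:

import math

# Hypothetical coefficients standing in for the summary(model) output
b0, b_gre, b_gpa, b_rank = -3.5, 0.004, 0.8, -0.6

# Predicted admission probability for one applicant (GRE 320, GPA 3.6, Rank 2)
z = b0 + b_gre * 320 + b_gpa * 3.6 + b_rank * 2
p = 1 / (1 + math.exp(-z))
print(f"Predicted probability of admission: {p:.3f}")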
Exp 3 : Apriori Algorithm
Aim: To implement the Apriori algorithm in Python.
Algorithm:
Step 1: Start
Step 2: Input Dataset: Provide a list of transactions, where each transaction is a list of items.
Step 3: Encode Data: One-hot encode the transactions into a DataFrame with one column per item.
Step 4: Apply Apriori Algorithm: Use the apriori() function to find frequent itemsets with a minimum support threshold.
Step 5: Generate Association Rules: Use association_rules() to generate rules from the frequent itemsets, filtering based on a metric like "lift".
Step 6: Print the frequent itemsets and the rules.
Step 7: Stop
Source code:
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

transactions = [['Milk', 'Bread'],
                ['Bread', 'Butter'],
                ['Milk', 'Butter']]

# One-hot encode the transactions: one True/False column per item
te = TransactionEncoder()
df = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)

# Frequent itemsets with a minimum support threshold (0.3 is an assumed value)
frequent_itemsets = apriori(df, min_support=0.3, use_colnames=True)
print(frequent_itemsets)

# Rules filtered by lift (the threshold is an assumed value)
rules = association_rules(frequent_itemsets, metric="lift", min_threshold=0.7)
print(rules)
Output:
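The support, confidence, and lift reported by association_rules() can be verified by hand. A minimal Python sketch for the rule {Milk} -> {Bread} over the same three transactions:

transactions = [['Milk', 'Bread'],
                ['Bread', 'Butter'],
                ['Milk', 'Butter']]
n = len(transactions)

support_milk = sum('Milk' in t for t in transactions) / n                     # 2/3
support_bread = sum('Bread' in t for t in transactions) / n                   # 2/3
support_both = sum('Milk' in t and 'Bread' in t for t in transactions) / n    # 1/3

confidence = support_both / support_milk   # 0.5
lift = confidence / support_bread          # 0.75, below 1: weak association
print(confidence, lift)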
Exp 4 : K-Nearest Neighbors (KNN) Algorithm
Algorithm :
Step 1 : Start
Step 2 : Load the Iris dataset and view it as a data frame
Step 3 : Separate the features (X) and target labels (y)
Step 4 : Split the data into training and testing sets
Step 5 : Create a KNN classifier with k = 3 neighbors
Step 6 : Fit the classifier on the training set and predict on the test set
Step 7 : Visualize the predictions using a scatter plot of the first two features
Step 8 : Stop
Source code:
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

iris = datasets.load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
print(df)

X = iris['data']
y = iris['target']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

knn3 = KNeighborsClassifier(n_neighbors=3)
knn3.fit(X_train, y_train)
y_pred_3 = knn3.predict(X_test)

# Scatter plot of the first two features, colored by predicted class
plt.scatter(X_test[:, 0], X_test[:, 1], c=y_pred_3)
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1])
plt.show()
Output :
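The choice k = 3 is one of several reasonable values; a minimal sketch comparing test accuracy across a few neighborhood sizes (the split parameters are assumptions):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Test accuracy for several neighborhood sizes
for k in (1, 3, 5, 7, 9):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    print(k, knn.score(X_test, y_test))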
Exp 5 : Decision Tree Algorithm
Algorithm :
Step 1 : Start
Step 2 : Load the Iris dataset (features X and labels y)
Step 3 : Split the data into training and testing sets
Step 4 : Create a decision tree classifier and fit it on the training set
Step 5 : Predict on the test set
Step 6 : Compute and print the accuracy of the predictions
Step 7 : Visualize the trained decision tree with feature and class names
Step 8 : Stop
Source code :
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.metrics import accuracy_score

iris = load_iris()
X = iris.data    # Features
y = iris.target  # Labels
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

clf = DecisionTreeClassifier()
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)

# Visualize the trained tree with feature and class names
plt.figure(figsize=(12, 8))
plot_tree(clf, feature_names=iris.feature_names, class_names=iris.target_names, filled=True)
plt.show()
Output :
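Besides the graphical plot, scikit-learn can print the learned rules as plain text, which is handy when no plotting backend is available. A minimal sketch:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
clf = DecisionTreeClassifier(random_state=42).fit(iris.data, iris.target)

# Plain-text view of the learned decision rules
print(export_text(clf, feature_names=list(iris.feature_names)))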
Exp 6 : K-Means Algorithm
Algorithm :
Step 1 : Start
Step 2 : Load the Iris dataset (features X)
Step 3 : Create a k-means model with k = 3 clusters
Step 4 : Fit the model on the data
Step 5 : Retrieve the cluster labels for each point
Step 6 : Print the cluster centers
Step 7 : Plot the data points colored by cluster labels using the first two features
Step 8 : Stop
Source code :
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans

iris = load_iris()
X = iris.data

kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
kmeans.fit(X)
labels = kmeans.labels_

print("Cluster centers:")
print(kmeans.cluster_centers_)

# Plot points colored by cluster label, using the first two features
plt.scatter(X[:, 0], X[:, 1], c=labels)
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
plt.show()
Output :
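The choice of k = 3 clusters can be sanity-checked with the elbow method: run k-means for several values of k and watch the inertia (within-cluster sum of squares) fall. A minimal sketch:

from sklearn.datasets import load_iris
from sklearn.cluster import KMeans

X = load_iris().data

# Inertia (within-cluster sum of squares) for k = 1..6
for k in range(1, 7):
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    print(k, round(km.inertia_, 1))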
Exp 7 : K-Medoids Algorithm
Algorithm :
Step 1 : Start
Step 2 : Load the Iris dataset (features X)
Step 3 : Randomly select k initial medoids from the data points
Step 4 : Assign each data point to the nearest medoid (using Euclidean distance)
Step 5 : Update each medoid to the most central point (minimize total distance in its cluster)
Step 6 : Repeat Steps 4 and 5 until the medoids do not change or the maximum number of iterations is reached
Step 7 : Plot the final clusters and medoids using the first two features
Step 8 : Stop
Source code :
import numpy as np
import random
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris

iris = load_iris()
X = iris.data

def distance(a, b):
    # Euclidean distance between two points
    return np.linalg.norm(a - b)

def assign_clusters(X, medoids):
    # Assign each data point to its nearest medoid
    clusters = {}
    for idx in range(len(medoids)):
        clusters[idx] = []
    for point in X:
        distances = [distance(point, medoid) for medoid in medoids]
        cluster_idx = np.argmin(distances)
        clusters[cluster_idx].append(point)
    return clusters

def compute_cost(cluster, medoid):
    # Total distance from a candidate medoid to all points in its cluster
    cost = 0
    for point in cluster:
        cost += distance(point, medoid)
    return cost

def update_medoids(clusters):
    # For each cluster, pick the point that minimizes the total distance
    new_medoids = []
    for cluster in clusters.values():
        min_cost = float('inf')
        best_medoid = None
        for candidate in cluster:
            cost = compute_cost(cluster, candidate)
            if cost < min_cost:
                min_cost = cost
                best_medoid = candidate
        new_medoids.append(best_medoid)
    return new_medoids

k = 3
max_iter = 100
initial_medoids = random.sample(list(X), k)
medoids = initial_medoids
for i in range(max_iter):
    clusters = assign_clusters(X, medoids)
    new_medoids = update_medoids(clusters)
    if all(np.array_equal(m, n) for m, n in zip(medoids, new_medoids)):
        break
    medoids = new_medoids

# Plot the final clusters and medoids using the first two features
for idx, points in clusters.items():
    points = np.array(points)
    plt.scatter(points[:, 0], points[:, 1], label=f"Cluster {idx}")
medoid_arr = np.array(medoids)
plt.scatter(medoid_arr[:, 0], medoid_arr[:, 1], c='black', marker='X', s=120, label='Medoids')
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
plt.legend()
plt.show()
Output :
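For comparison with the from-scratch version, the scikit-learn-extra package (assumed to be installed) ships a ready-made KMedoids estimator. A minimal sketch:

# Requires the scikit-learn-extra package (pip install scikit-learn-extra)
from sklearn.datasets import load_iris
from sklearn_extra.cluster import KMedoids

X = load_iris().data
km = KMedoids(n_clusters=3, random_state=42).fit(X)
print("Medoids:")
print(km.cluster_centers_)  # the medoids are actual data points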
Exp 8 : Classification Model
Algorithm :
Step 1 : Start
Step 2 : Install and load the caret and e1071 packages
Step 3 : Set a random seed for reproducibility
Step 4 : Load the iris dataset
Step 5 : Split data into training (70%) and testing (30%) sets
Step 6 : Train a KNN classification model on the training set
Step 7 : Print the trained model
Step 8 : Predict the species for the test set
Step 9 : Make sure the predictions and actual values are factors with the same levels
Step 10 : Compute and print the confusion matrix
Step 11 : Stop
Source code :
install.packages("caret")
install.packages("e1071")
library(caret)
library(e1071)
set.seed(123)
print(knn_model)
# Make sure predictions and actual values are factors with the same levels
print(conf_matrix)
Output :
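The same 70/30 KNN experiment can be cross-checked in Python; a minimal sketch, with split parameters chosen as assumptions to mirror the R code:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

# 70/30 stratified split; the seed is an assumption
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=123, stratify=y)

knn = KNeighborsClassifier().fit(X_train, y_train)
print(confusion_matrix(y_test, knn.predict(X_test)))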
Exp 9 : Correlation and ANOVA
Algorithm :
Step 1 : Start
Step 2 : Load the required libraries (ggplot2, corrplot, dplyr) and the iris dataset
Step 3 : Compute the correlation matrix of the numeric features and print it
Step 4 : Visualize the correlation matrix with corrplot
Step 5 : Fit a one-way ANOVA model and display its summary
Step 6 : Draw a boxplot of the measurements grouped by species
Step 7 : Stop
Source code :
library(ggplot2)
library(corrplot)
library(dplyr)

data(iris)

# Correlation matrix of the four numeric features
cor_matrix <- cor(iris[, 1:4])
print("Correlation Matrix:")
print(cor_matrix)
corrplot(cor_matrix, method = "circle")

# One-way ANOVA (Sepal.Length as the response is an assumption)
anova_model <- aov(Sepal.Length ~ Species, data = iris)
summary(anova_model)

# Boxplot of the response grouped by species
ggplot(iris, aes(x = Species, y = Sepal.Length)) +
  geom_boxplot() +
  theme_minimal()
Output :
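The one-way ANOVA can be cross-checked in Python with scipy; a minimal sketch, assuming sepal length as the response variable as in the R code above:

from scipy.stats import f_oneway
from sklearn.datasets import load_iris

iris = load_iris()
sepal_length = iris.data[:, 0]

# One-way ANOVA of sepal length across the three species
groups = [sepal_length[iris.target == t] for t in (0, 1, 2)]
f_stat, p_value = f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3g}")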