
Lab Manual

Name: Sudhanshu Kumar Singh

Roll No.: 22BTH002
Q1. Write a Python program to generate an 8-character mixed random password using ASCII letters and digits.

Code:-

import random as r
import string as s

# sample() draws 8 distinct characters from the pool of ASCII letters and digits
passwd = r.sample(s.ascii_letters + s.digits, 8)
passwd = "".join(passwd)
print("Password ---->> ", passwd)

passwd = r.sample(s.ascii_letters + s.digits, 8)
passwd = "".join(passwd)
print("Password ---->> ", passwd)

passwd = r.sample(s.ascii_letters + s.digits, 8)
passwd = "".join(passwd)
print("Password ---->> ", passwd)

Output:-

Password ---->> r5JQG4Cn


Password ---->> GjVimoB4
Password ---->> B3COelvM
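
Note: random.sample() draws without replacement, so no character repeats within a password. If repeated characters should be allowed, random.choices() is a possible alternative; a minimal sketch is shown below (for security-sensitive passwords the standard secrets module is generally preferred over random):

import random as r
import string as s

# choices() samples with replacement, so a character may appear more than once
passwd = "".join(r.choices(s.ascii_letters + s.digits, k=8))
print("Password ---->> ", passwd)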

Q2. Demonstrate the following random methods in Python with appropriate outputs:

i. random() ii. randint() iii. uniform() iv. sample()

Solution:-

i. random(): Returns a random floating-point number in the range [0.0, 1.0).

Code:-
import random as r

print("Random Floating value ------>>> ", r.random())
print("Random Floating value ------>>> ", r.random())
print("Random Floating value ------>>> ", r.random())

Output:-
Random Floating value ------>>> 0.13906128209448765
Random Floating value ------>>> 0.3077718443686104
Random Floating value ------>>> 0.4585025867516742
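
If the same sequence of values is needed on every run, the generator can be seeded first; a small sketch (the seed value 42 is arbitrary):

import random as r

r.seed(42)  # fix the generator's internal state so the sequence repeats across runs
print("Seeded Floating value ------>>> ", r.random())
print("Seeded Floating value ------>>> ", r.random())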

ii. randint(a, b): Returns a random integer N with a <= N <= b (both endpoints included).

Code:-

import random as r

print("Random integer value between 1 to 100 ----->>", r.randint(1, 100))
print("Random integer value between 1 to 100 ----->>", r.randint(1, 100))
print("Random integer value between 1 to 100 ----->>", r.randint(1, 100))

Output:-

Random integer value between 1 to 100 ----->> 47


Random integer value between 1 to 100 ----->> 100
Random integer value between 1 to 100 ----->> 32

iii. uniform(a, b): Returns a random floating-point number N with a <= N <= b.

Code:-

import random as r

print("Random Floating value between 1 to 4 ----->>", r.uniform(1, 4))
print("Random Floating value between 1 to 4 ----->>", r.uniform(1, 4))
print("Random Floating value between 1 to 4 ----->>", r.uniform(1, 4))

Output:-

Random Floating value between 1 to 4 ----->> 1.1859982019476065


Random Floating value between 1 to 4 ----->> 3.6030510171799035
Random Floating value between 1 to 4 ----->> 1.979743121382065
iv. sample(population, k): Returns a list of k unique elements chosen without replacement from a sequence or set.

Code:-

import random as r

a = [11, 12, 13, 14, 15, 16, 17, 18, 19]

print("Random 2 value in set a ----->>> ", r.sample(a, 2))
print("Random 2 value in set a ----->>> ", r.sample(a, 2))
print("Random 2 value in set a ----->>> ", r.sample(a, 2))

Output:-

Random 2 value in set a ----->>> [18, 13]


Random 2 value in set a ----->>> [17, 14]
Random 2 value in set a ----->>> [12, 19]

Q3. Implement Exploratory Data Analysis on any dataset.

➢ How to load a dataset.


Code :
import pandas as pd
data = pd.read_csv('data.csv') # Load the dataset
print(data)

Output:
➢ How to draw a graph of a dataset.

Code :

import matplotlib.pyplot as plt
import numpy as np

x = np.array([1, 4, 3, 8, 6])
y = np.array([9, 8, 15, 1, 6])

plt.plot(x, y)
plt.show()

Output:
➢ How to give a title and axis labels to a graph.

Code :

import matplotlib.pyplot as plt
import numpy as np

x = np.array([1, 3, 5, 7, 9])
y = np.array([10, 19, 35, 40, 50])

plt.plot(x, y)
plt.xlabel("Roll Number")
plt.ylabel("Marks")
plt.title("Student Reports")
plt.show()

Output:
➢ How to draw a dash-dot line in a graph.

Code :

import matplotlib.pyplot as plt
import numpy as np

x1 = [10, 30, 50, 90]
y1 = [10, 21, 73, 40]

plt.xlabel("X-axis")
plt.ylabel("Y-axis")
plt.title("My Chart")
plt.plot(x1, y1, '-.')  # '-.' is the dash-dot line style
plt.show()

Output:
➢ How to fill the colour between two lines.

Code :

import matplotlib.pyplot as plt
import numpy as np

x = np.array([2, 8, 6, 9, 10])
y = np.array([5, 15, 19, 21, 24])
plt.plot(x, y)

y1 = [5, 21, 23, 24, 29]
plt.plot(x, y1, '-.')

plt.xlabel("X-axis data")
plt.ylabel("Y-axis data")
plt.title('multiple plots')
plt.fill_between(x, y, y1, color='black', alpha=.9)  # shade the region between the two lines
plt.show()

Output:

➢ How to resize a graph.

Code :

import matplotlib.pyplot as plt
import numpy as np

x = np.arange(1, 25, 0.1)
y = np.sin(x)

plt.subplot(5, 2, 1)  # place the plot in one cell of a 5x2 grid, shrinking it
plt.plot(x, y)
plt.show()
Output:
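
Note: plt.subplot(5, 2, 1) shrinks the plot by placing it in one cell of a 5x2 grid. A more direct way to resize the whole figure is the figsize argument; a minimal sketch (the 8x4-inch size is just an example):

import matplotlib.pyplot as plt
import numpy as np

x = np.arange(1, 25, 0.1)
y = np.sin(x)

plt.figure(figsize=(8, 4))  # width and height of the figure in inches
plt.plot(x, y)
plt.show()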

➢ How to add a legend label to a graph.

Code:

import matplotlib.pyplot as plt
import numpy as np

x = [1, 3, 7, 14, 18, 36, 40]

plt.plot(x, np.sin(x), label="line")
plt.legend()
plt.show()

Output:

➢ How to mark points on a line in a graph.

Code:

import matplotlib.pyplot as plt

age = [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25]
cardiac_cases = [16, 23, 25, 27, 29, 31, 32, 40, 41, 43, 45, 50, 56, 57, 59, 61]

plt.xlabel("Age")
plt.ylabel("Percentage")
plt.plot(age, cardiac_cases, color='black', linewidth=2, label="Cardiac Cases",
         marker='o', markerfacecolor='black', markersize=12)
plt.legend(loc='lower right', ncol=1)
plt.show()

Output:

➢ How to make a column (bar) chart of a dataset.

Code:

import matplotlib.pyplot as plt

data = {'Html': 201, 'Css': 200, 'Java Script': 123, 'React': 210}
courses = list(data.keys())
values = list(data.values())

fig = plt.figure(figsize=(10, 5))
plt.bar(courses, values, color='black', width=0.4)
plt.xlabel("Courses offered")
plt.ylabel("No. of students enrolled")
plt.title("Students enrolled in different courses")
plt.show()

Output:

➢ How to check whether a dataset has null values.

Code:

import pandas as pd

data = pd.read_csv('data.csv')
print(data.isnull().sum())  # count of missing values in each column

Output:
x 0
y 0
dtype: int64
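
If the check does report missing values, they can be dropped or filled before further analysis; a minimal sketch (the fill value 0 is just an example):

import pandas as pd

data = pd.read_csv('data.csv')

cleaned = data.dropna()   # drop every row that contains a missing value
filled = data.fillna(0)   # or replace missing values with a constant

print(cleaned.isnull().sum())
print(filled.isnull().sum())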

➢ How to compute summary statistics of a dataset.

Code:

import pandas as pd

data = pd.read_csv('data.csv')
print(data.describe())  # count, mean, std, min, quartiles and max per column

Output:

➢ How to check the data types of a dataset's columns.

Code:

import pandas as pd

data = pd.read_csv('data.csv')
print(data.dtypes)

Output:

x float64
y float64
dtype: object

Q4. Implement a Linear Regression model and perform the following tasks:

i. Load the "diabetes" dataset into the Python environment.

ii. Identify the missing values and count the missing values feature-wise.

iii. Visualize the outliers using a boxplot graph.

iv. Train the model with a 20% test data size.

v. Visualize the linear regression plot using the matplotlib library.

Code:-

import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# i. Load the "diabetes" dataset into the Python environment
db = datasets.load_diabetes()

# ii. Identify the missing values and count them feature-wise
dbs = pd.DataFrame(db.data, columns=db.feature_names)
print(dbs.isnull().sum())

Output:

# iii. Visualize the outliers using a boxplot graph
plt.figure(figsize=(10, 6))
plt.boxplot(dbs)
plt.xlabel('No of column')
plt.ylabel('Data')
plt.title('Boxplot of Features')
plt.show()

Output:

# iv. Train the model with 20% test data size
x = db.data[:, np.newaxis, 2]  # use only the third feature (BMI) as the predictor
y = db.target

db_x_train, db_x_test, db_y_train, db_y_test = train_test_split(x, y, test_size=0.2)

model = linear_model.LinearRegression()
model.fit(db_x_train, db_y_train)
db_y_predicted = model.predict(db_x_test)
# v. Visualize the linear regression plot using the matplotlib library
plt.scatter(db_x_test, db_y_test, color='black')    # actual target values vs. the BMI feature
plt.xlabel('BMI feature (test set)')
plt.ylabel('Target value')
plt.title('Linear Regression Plot')
plt.plot(db_x_test, db_y_predicted, color='black')  # fitted regression line
plt.show()

Output:
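
mean_squared_error is imported in the listing above but not used; as an optional add-on, the fit can be quantified on the same 20% test split by continuing from the variables defined above:

# Quantify the fit on the held-out test data
mse = mean_squared_error(db_y_test, db_y_predicted)
print("Mean squared error ---->> ", mse)
print("Coefficient ----------->> ", model.coef_)
print("Intercept ------------->> ", model.intercept_)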

Q5. Implement and demonstrate the Find-S algorithm for finding the most specific hypothesis based on a given set of training data samples. Read the training data from a .CSV file.
Code:-

import pandas as pd

# Load training data from a CSV file
def load_data(file_name):
    data = pd.read_csv(file_name)
    return data

# FIND algorithm: build candidate attribute = value hypotheses and keep the ones
# that occur in examples of the first target class
def find_algorithm(data, target_attribute):
    # Initialize the list of candidate hypotheses
    hypothesis = []

    # Iterate over each attribute (except the target) and each of its values
    for attribute in data.columns:
        if attribute != target_attribute:
            attribute_values = data[attribute].unique()
            for value in attribute_values:
                hypothesis.append(f"{attribute} = {value}")

    # Refine: keep attribute = value pairs that appear in rows of the first target class
    # (values are compared as strings, since they were formatted into the hypothesis text)
    refined_hypothesis = []
    for hypothesis_value in hypothesis:
        attribute, value = hypothesis_value.split(" = ")
        for index, row in data.iterrows():
            if str(row[attribute]) == value and row[target_attribute] == data[target_attribute].unique()[0]:
                refined_hypothesis.append(hypothesis_value)

    # Return the most specific hypothesis (duplicates removed)
    return list(set(refined_hypothesis))

# Test the FIND algorithm
data = load_data('data2.csv')
target_attribute = 'class'
hypothesis = find_algorithm(data, target_attribute)

print("Most Specific Hypothesis:")
for hypothesis_value in hypothesis:
    print(hypothesis_value)

Output:
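
For comparison, the textbook Find-S procedure starts from the first positive example and generalizes one attribute at a time. The sketch below assumes the last column of data2.csv is the class label and that positive examples are marked 'yes' (both are assumptions, since the actual file is not shown):

import pandas as pd

def find_s(file_name, positive_label='yes'):
    data = pd.read_csv(file_name)
    attributes = data.iloc[:, :-1].values  # all columns except the class label
    labels = data.iloc[:, -1].values       # last column assumed to hold the class
    hypothesis = None
    for row, label in zip(attributes, labels):
        if label != positive_label:
            continue                        # Find-S ignores negative examples
        if hypothesis is None:
            hypothesis = list(row)          # start from the first positive example
        else:
            # generalize every attribute that disagrees with this positive example
            hypothesis = [h if h == v else '?' for h, v in zip(hypothesis, row)]
    return hypothesis

print("Find-S hypothesis:", find_s('data2.csv'))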

Q6. Implement the logistic regression algorithm and visualize the sigmoid curve based on the trained model.

Code:-
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Generate sample data
np.random.seed(0)
X = np.random.randn(100, 1)
y = (X > 0).astype(int).ravel()  # flatten to 1-D so fit() gets labels in the expected shape

# Scale the data
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Train the logistic regression model
model = LogisticRegression()
model.fit(X_scaled, y)

# Get the learned coefficients
coef = model.coef_[0][0]
intercept = model.intercept_[0]

# Generate the sigmoid curve from the learned parameters
x_values = np.linspace(-3, 3, 100)
y_values = 1 / (1 + np.exp(-(coef * x_values + intercept)))

# Plot the data and the sigmoid curve
plt.figure(figsize=(8, 6))
plt.scatter(X_scaled, y, label='Data')
plt.plot(x_values, y_values, color='red', label='Sigmoid Curve')
plt.title('Logistic Regression')
plt.xlabel('Input')
plt.ylabel('Probability')
plt.legend()
plt.show()

Output:-
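
An equivalent way to draw the curve is to let the fitted model compute the probabilities itself with predict_proba, instead of evaluating the sigmoid formula manually; a short sketch continuing from the model and x_values defined above:

# Probability of class 1 along the same x range, computed by the model
probs = model.predict_proba(x_values.reshape(-1, 1))[:, 1]

plt.plot(x_values, probs, linestyle='--', label='predict_proba curve')
plt.legend()
plt.show()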

Q7. Write a program to implement the k-Nearest Neighbor (KNN) algorithm to classify the Iris dataset. Print both correct and wrong predictions.

Code:-

# Import necessary libraries
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
import numpy as np

iris = load_iris()  # Load the Iris dataset
X = iris.data
y = iris.target

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.24)

knn = KNeighborsClassifier(n_neighbors=5)  # Create the KNN classifier
knn.fit(X_train, y_train)                  # Train the model
y_pred = knn.predict(X_test)               # Make predictions

# Print wrong predictions
print("\nWrong Predictions:")
for i in range(len(y_test)):
    if y_test[i] != y_pred[i]:
        print(f"Actual: {iris.target_names[y_test[i]]}, Predicted: {iris.target_names[y_pred[i]]}")

Output:-

# Print correct predictions
print("Correct Predictions:")
for i in range(len(y_test)):
    if y_test[i] == y_pred[i]:
        print(f"Actual: {iris.target_names[y_test[i]]}, Predicted: {iris.target_names[y_pred[i]]}")
# Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
Output:

Accuracy: 0.9444444444444444

print("Classification Report:")

print(classification_report(y_test, y_pred))

Output:

print("Confusion Matrix:")

print(confusion_matrix(y_test, y_pred))

Output:
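
The accuracy above depends on the choice of n_neighbors; as an optional extension (not part of the original exercise), a few values of k can be compared on the same train/test split:

# Compare accuracy for a few candidate values of k
for k in [1, 3, 5, 7, 9]:
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(X_train, y_train)
    print(f"k={k}  accuracy={accuracy_score(y_test, clf.predict(X_test)):.3f}")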

Q8. Implement a Support Vector Machine on any dataset and compare its accuracy with Logistic Regression.

Code:

import pandas as pd
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
from matplotlib import pyplot as plt

iris = datasets.load_iris()  # Load the Iris dataset
df = pd.DataFrame(data=iris.data, columns=iris.feature_names)
df['target'] = iris.target

# Per-class subsets (useful for class-wise plotting)
df0 = df[df.target == 0]
df1 = df[df.target == 1]
df2 = df[df.target == 2]

# Split the data
X_train, X_test, y_train, y_test = train_test_split(df.drop('target', axis=1), df['target'], test_size=0.43)

svm = SVC(kernel='rbf', C=1)  # Implement SVM
svm.fit(X_train, y_train)

lr = LogisticRegression(max_iter=1000)  # Implement Logistic Regression
lr.fit(X_train, y_train)

y_pred_svm = svm.predict(X_test)  # Predict and evaluate
y_pred_lr = lr.predict(X_test)

print("SVM Accuracy:", accuracy_score(y_test, y_pred_svm))

Output:

print("Logistic Regression Accuracy:", accuracy_score(y_test, y_pred_lr))

Output:
print("SVM Classification Report:")

print(classification_report(y_test, y_pred_svm))

Output:

print("Logistic Regression Classification Report:")

print(classification_report(y_test, y_pred_lr))

Output:
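
The per-class frames df0, df1 and df2 built in the listing above are not plotted anywhere; a short sketch that uses them to visualize how the classes separate on two of the features (the chosen feature pair is just an example):

# Scatter the three classes on sepal length vs. sepal width
plt.scatter(df0['sepal length (cm)'], df0['sepal width (cm)'], marker='o', label=iris.target_names[0])
plt.scatter(df1['sepal length (cm)'], df1['sepal width (cm)'], marker='+', label=iris.target_names[1])
plt.scatter(df2['sepal length (cm)'], df2['sepal width (cm)'], marker='^', label=iris.target_names[2])
plt.xlabel('sepal length (cm)')
plt.ylabel('sepal width (cm)')
plt.legend()
plt.show()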
