PYTHONPRACT

The document outlines various machine learning techniques using Python's scikit-learn library, including linear regression, K-means clustering, logistic regression, decision trees, support vector machines, and ensemble methods like voting classifiers and random forests. It includes code snippets for model training, evaluation metrics such as mean squared error and accuracy, and data preprocessing steps like train-test splitting. Additionally, it demonstrates the use of the Apriori algorithm for association rule mining.


from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# First column as the feature, second as the target (reshaped to 2-D for scikit-learn)
x = df.iloc[:, 0].values.reshape(-1, 1)
y = df.iloc[:, 1].values.reshape(-1, 1)

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)
model = LinearRegression()
model.fit(x_train, y_train)
y_pred = model.predict(x_test)

mse = mean_squared_error(y_test, y_pred)  # Mean Squared Error
rmse = mse ** 0.5  # Root Mean Squared Error
r2 = r2_score(y_test, y_pred)  # R-squared
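The snippet above assumes a DataFrame df already exists. As a self-contained check, here is a minimal sketch on synthetic data; the column names and the coefficient 2.5 are made up purely for illustration:

import numpy as np
import pandas as pd

# Hypothetical data: target is a noisy linear function of the feature
rng = np.random.default_rng(0)
feature = rng.uniform(0, 10, 100)
target = 2.5 * feature + rng.normal(0, 1, 100)
df = pd.DataFrame({"feature": feature, "target": target})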

from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

# Cluster the iris features into 3 groups
kmeans = KMeans(n_clusters=3, random_state=42, n_init=10)
kmeans.fit(x)
labels = kmeans.labels_

# Plot the first two features, coloured by cluster label
plt.figure(figsize=(8, 6))
plt.scatter(x[:, 0], x[:, 1], c=labels)
plt.title("K-Means Clustering (First Two Features)")
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1])
plt.show()
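Here n_clusters=3 matches the three iris species, but a quick way to sanity-check the cluster count is the elbow method. A minimal sketch, assuming x holds the iris features as in the snippet above:

import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

inertias = []
for k in range(1, 8):
    km = KMeans(n_clusters=k, random_state=42, n_init=10).fit(x)
    inertias.append(km.inertia_)  # within-cluster sum of squared distances

plt.plot(range(1, 8), inertias, marker='o')
plt.xlabel("number of clusters k")
plt.ylabel("inertia")
plt.title("Elbow Method")
plt.show()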

from sklearn.linear_model import LogisticRegression

# max_iter raised so the solver converges on the training data
model = LogisticRegression(max_iter=1000)

from sklearn.tree import DecisionTreeClassifier, plot_tree
import matplotlib.pyplot as plt

model = DecisionTreeClassifier()

# Visualise the tree after model.fit(...) has been called
plt.figure(figsize=(12, 8))
plot_tree(model)
plt.title("Decision Tree Visualization")
plt.show()
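If a figure window is inconvenient, scikit-learn can also print the fitted tree as plain text. A small sketch, assuming model has been fitted on the iris data:

from sklearn.tree import export_text

print(export_text(model, feature_names=list(iris.feature_names)))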
from sklearn.svm import SVC

model = SVC(kernel='linear')  # linear-kernel support vector classifier

from sklearn.linear_model import Perceptron

# tol stops training early once the loss improvement falls below 1e-3
model = Perceptron(max_iter=1000, tol=1e-3)

from sklearn.naive_bayes import GaussianNB

# Gaussian Naive Bayes: assumes features are normally distributed within each class
model = GaussianNB()

from mlxtend.frequent_patterns import apriori, association_rules

# Keep itemsets appearing in at least 50% of transactions,
# then rules with confidence of at least 0.7
frequent_itemsets = apriori(df, min_support=0.5, use_colnames=True)
rules = association_rules(frequent_itemsets, metric="confidence", min_threshold=0.7)
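apriori expects a one-hot encoded DataFrame: each row is a transaction and each boolean column marks whether that item is present. A minimal sketch with made-up transactions:

import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical basket data, one row per transaction
df = pd.DataFrame({
    'bread':  [True, True, False, True],
    'butter': [True, True, True, False],
    'milk':   [False, True, True, True],
})
frequent_itemsets = apriori(df, min_support=0.5, use_colnames=True)
rules = association_rules(frequent_itemsets, metric="confidence", min_threshold=0.7)
print(rules[['antecedents', 'consequents', 'support', 'confidence']])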

from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier

log_reg = LogisticRegression(max_iter=1000)
dt_clf = DecisionTreeClassifier()
svm_clf = SVC(probability=True)  # probability=True is required for soft voting

voting_clf = VotingClassifier(
    estimators=[
        ('lr', log_reg),
        ('dt', dt_clf),
        ('svc', svm_clf)
    ],
    voting='soft'
)
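The ensemble then trains and predicts like any single estimator. A short usage sketch, assuming the iris train/test split from the template at the end of this document:

voting_clf.fit(x_train, y_train)
print(voting_clf.score(x_test, y_test))  # mean accuracy of the soft-voting ensemble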

from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier(n_estimators=100)  # ensemble of 100 decision trees
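After fitting, a random forest exposes per-feature importances, which is often a reason to prefer it over a single tree. A sketch, assuming the iris split from the template below:

model.fit(x_train, y_train)
for name, importance in zip(iris.feature_names, model.feature_importances_):
    print(f"{name}: {importance:.3f}")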

ACCURACY
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
accuracy = accuracy_score(y_test, pred)
conf_matrix = confusion_matrix(y_test, pred)
class_report = classification_report(y_test, pred)
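These three calls return, respectively, a float, a NumPy array, and a pre-formatted string, so they can be printed directly:

print(f"Accuracy: {accuracy:.3f}")
print("Confusion matrix:")
print(conf_matrix)
print(class_report)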

# Import necessary libraries
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Main body: load the dataset
iris = load_iris()

# Features (x) and target (y)
x = iris.data  # Input features
y = iris.target  # Target labels

# Split the data into training and testing sets (80% train, 20% test)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)

# Initialize the model (substitute any classifier from the sections above for anymodel)
model = anymodel()

# Train the model on the training data
model.fit(x_train, y_train)

# Make predictions on the test data
pred = model.predict(x_test)
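To run the template end to end, substitute a concrete classifier for the anymodel placeholder. For example, with the decision tree (a sketch combining the pieces above):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

iris = load_iris()
x_train, x_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=42)

model = DecisionTreeClassifier()
model.fit(x_train, y_train)
pred = model.predict(x_test)
print(accuracy_score(y_test, pred))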
