ML 8,9,10

The document outlines two experiments involving machine learning algorithms: Support Vector Machine (SVM) for classification and linear regression for predictive analysis. SVM is described as a robust algorithm for both linear and nonlinear classification tasks, with applications in various fields, achieving 100% accuracy in the experiment. The linear regression section details a simple approach to predict continuous variables, demonstrating the generation of synthetic data and visualization of the regression line.
EXPERIMENT: 8

AIM: Apply Support Vector algorithm for classification.

Description: A Support Vector Machine (SVM) is a powerful machine learning algorithm widely used for both linear and nonlinear classification, as well as regression and outlier detection tasks. SVMs are highly adaptable, making them suitable for applications such as text classification, image classification, spam detection, handwriting identification, gene expression analysis, face detection, and anomaly detection. SVMs are particularly effective because they find the maximum-margin hyperplane separating the classes in the target feature, which makes them robust for both binary and multiclass classification. This experiment applies an SVM with a linear kernel to the Iris dataset and evaluates its classification accuracy (a nonlinear-kernel variant is sketched at the end of this document).

Code:

# Import necessary libraries
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report, accuracy_score

# Load the Iris dataset
iris = datasets.load_iris()
X = iris.data    # Features
y = iris.target  # Labels

# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Create and train the Support Vector Classifier (SVC)
svm_classifier = SVC(kernel='linear', C=1.0, random_state=42)  # 'linear' kernel used here
svm_classifier.fit(X_train, y_train)

# Make predictions on the test data
y_pred = svm_classifier.predict(X_test)

# Evaluate the model
print("Accuracy:", accuracy_score(y_test, y_pred))
print("\nClassification Report:\n", classification_report(y_test, y_pred))

Output:

Accuracy: 1.0

Classification Report:
              precision    recall  f1-score   support

           0       1.00      1.00      1.00        19
           1       1.00      1.00      1.00        13
           2       1.00      1.00      1.00        13

    accuracy                           1.00        45
   macro avg       1.00      1.00      1.00        45
weighted avg       1.00      1.00      1.00        45

EXPERIMENT: 9

AIM: Demonstrate simple linear regression algorithm for a regression problem.

Description: Linear regression is one of the simplest and most popular machine learning algorithms. It is a statistical method used for predictive analysis. Linear regression makes predictions for continuous/real or numeric variables such as sales, salary, age, product price, etc.

[Figure: line of regression plotting the dependent variable against the independent variable]

Code:

import numpy as np
import matplotlib.pyplot as plt

# Generate synthetic data
np.random.seed(42)  # For reproducibility
X = 2 * np.random.rand(100, 1)           # 100 samples, single feature
y = 4 + 3 * X + np.random.randn(100, 1)  # Linear equation with some noise

# Visualize the data
plt.scatter(X, y, color='blue', label='Data Points')
plt.xlabel("X")
plt.ylabel("y")
plt.title("Synthetic Data")
plt.legend()
plt.show()

# Add bias term (x0 = 1 for all samples)
X_b = np.c_[np.ones((100, 1)), X]  # Add intercept column to X

# Closed-form solution (Normal Equation): theta = (X^T X)^(-1) X^T y
theta_best = np.linalg.inv(X_b.T @ X_b) @ X_b.T @ y

# Extract coefficients
intercept, slope = theta_best[0, 0], theta_best[1, 0]
print(f"Intercept: {intercept}")
print(f"Slope: {slope}")

# Predict values using the model
y_pred = X_b @ theta_best

# Plot the regression line
plt.scatter(X, y, color='blue', label='Data Points')
plt.plot(X, y_pred, color='red', label='Regression Line')
plt.xlabel("X")
plt.ylabel("y")
plt.title("Linear Regression")
plt.legend()
plt.show()

Output:

[Figure: "Synthetic Data" scatter plot of the generated data points, with X on the horizontal axis from 0.00 to 2.00]
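Note (not part of the original experiments): Experiment 8 uses a linear kernel, but the description also mentions nonlinear classification. The following is a minimal sketch, assuming the same X_train/X_test split from Experiment 8 is still in scope, of how the same classifier handles nonlinear decision boundaries via the radial basis function (RBF) kernel; gamma='scale' is scikit-learn's default and is shown only for illustration.

# Hedged sketch: nonlinear SVM via the RBF kernel
# (assumes X_train, X_test, y_train, y_test from Experiment 8)
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rbf_classifier = SVC(kernel='rbf', C=1.0, gamma='scale', random_state=42)
rbf_classifier.fit(X_train, y_train)  # same training data as the linear model
print("RBF accuracy:", accuracy_score(y_test, rbf_classifier.predict(X_test)))

Because Iris is nearly linearly separable, both kernels perform similarly here; the RBF kernel matters more on datasets where no separating hyperplane exists in the original feature space.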
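Note (not part of the original experiments): the normal equation in Experiment 9 explicitly inverts X_b.T @ X_b, which can be numerically unstable when features are correlated. The following is a minimal sketch, assuming the X, y, and X_b arrays generated in Experiment 9, that cross-checks the coefficients with two standard alternatives: np.linalg.lstsq and scikit-learn's LinearRegression.

# Hedged sketch: two cross-checks on the normal-equation coefficients
# (assumes X, y, and X_b from Experiment 9 are in scope)
import numpy as np
from sklearn.linear_model import LinearRegression

# Least-squares solver: avoids forming the explicit inverse of X_b.T @ X_b
theta_lstsq, *_ = np.linalg.lstsq(X_b, y, rcond=None)
print("lstsq intercept/slope:", theta_lstsq[0, 0], theta_lstsq[1, 0])

# scikit-learn fits the intercept internally, so it takes X without the bias column
lin_reg = LinearRegression().fit(X, y)
print("sklearn intercept/slope:", lin_reg.intercept_[0], lin_reg.coef_[0, 0])

Both should agree with the normal-equation result: an intercept near 4 and a slope near 3, matching the generating equation y = 4 + 3x + noise.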
