Supervised Learning Expanded Tables

Supervised Learning

Supervised learning involves training a model on a labeled dataset to make predictions or decisions based on new data. It is widely used in predictive modeling.

Definition: Learning from labeled data to predict outcomes on unseen data.
Data: Requires labeled data (input-output pairs), where the input is known and the output is explicitly defined.
Algorithms: Linear Regression, Logistic Regression, Decision Trees, SVM, KNN, Random Forest.
Applications: Spam detection, credit scoring, medical diagnosis, stock price prediction.
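The labeled-data workflow above can be sketched with a deliberately simple model. The nearest-class-mean rule and the toy price data below are illustrative assumptions, not a standard algorithm from the tables:

```python
# Supervised workflow sketch: learn from labeled (input, output) pairs,
# then predict labels for unseen inputs. Model: nearest class mean.
def fit(pairs):
    """pairs: list of (feature, label). Returns the mean feature per label."""
    sums, counts = {}, {}
    for x, label in pairs:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(model, x):
    # Assign the label whose class mean is closest to x.
    return min(model, key=lambda label: abs(model[label] - x))

# Hypothetical labeled training data (price, category).
labeled = [(1.0, "cheap"), (2.0, "cheap"), (9.0, "pricey"), (10.0, "pricey")]
model = fit(labeled)
print(predict(model, 1.5))  # "cheap"
print(predict(model, 8.0))  # "pricey"
```

The point is the shape of the workflow, not the model: every algorithm in the table above follows the same fit-on-labels, predict-on-unseen pattern.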
Classification Model
Classification is a supervised learning task where the goal is to predict the class labels of given data points.

Definition: Predicts discrete labels for input data by mapping them to predefined categories.
Common Algorithms: Logistic Regression, Naive Bayes, Decision Trees, Random Forest, SVM, Neural Networks.
Key Metrics: Accuracy, Precision, Recall, F1-Score, AUC-ROC curve.
Examples: Email spam classification, fraud detection, image recognition, disease diagnosis.
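The key metrics listed above can be computed directly from the confusion counts of a binary classifier. The labels below are toy values chosen for illustration:

```python
# Accuracy, precision, recall, and F1 from true vs. predicted binary labels.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

# Confusion counts: true/false positives and negatives.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

accuracy = (tp + tn) / len(y_true)                   # fraction correct
precision = tp / (tp + fp)                           # of predicted positives, how many are right
recall = tp / (tp + fn)                              # of actual positives, how many were found
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean of the two
print(accuracy, precision, recall, f1)  # all 0.75 on this toy data
```

Precision and recall matter most when classes are imbalanced (e.g., fraud detection), where accuracy alone can be misleading.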
K-Nearest Neighbors (KNN)
KNN is a simple, non-parametric, and lazy supervised learning algorithm used for both
classification and regression tasks.

Definition: Instance-based algorithm that predicts from the majority class of the nearest neighbors (for classification) or their average value (for regression).
Hyperparameters: K (number of neighbors), distance metric (e.g., Euclidean, Manhattan).
Advantages: Simple implementation, no training phase, adapts to complex patterns.
Disadvantages: Computationally expensive at prediction time, sensitive to noise and irrelevant features.
Applications: Recommendation systems, anomaly detection, handwriting recognition.
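A from-scratch sketch of the classifier described above: there is no training step, the algorithm simply stores the labeled points and votes among the k nearest at prediction time. The 2-D points and labels are hypothetical:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of ((x, y), label); query: (x, y) point to classify."""
    # Sort stored points by Euclidean distance to the query ("lazy" learning:
    # all work happens here, at prediction time).
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    # Majority vote among the k nearest neighbors.
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((0, 0), "A"), ((1, 0), "A"), ((0, 1), "A"),
         ((5, 5), "B"), ((6, 5), "B"), ((5, 6), "B")]
print(knn_predict(train, (0.5, 0.5)))  # "A"
print(knn_predict(train, (5.5, 5.5)))  # "B"
```

The sort makes each prediction cost O(n log n) in the number of stored points, which is the "computationally expensive" disadvantage from the table; real implementations use spatial indexes such as k-d trees.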
Support Vector Machine (SVM)
SVM is a powerful supervised learning algorithm that works by finding the optimal
hyperplane that separates classes in the feature space.

Definition: Classifies data by finding the hyperplane that maximizes the margin between data points of different classes.
Kernel Functions: Linear, Polynomial, Radial Basis Function (RBF), Sigmoid.
Strengths: Effective in high-dimensional spaces, robust against overfitting.
Limitations: Computationally expensive on large datasets, sensitive to kernel choice and parameter tuning.
Applications: Text classification, image recognition, bioinformatics, face detection.
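One way to sketch the margin-maximization idea is a linear SVM trained in the primal with hinge loss plus L2 regularization (a common alternative to solving the dual). The learning rate, regularization strength, and the linearly separable toy points below are all assumptions for illustration:

```python
def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """X: list of (x1, x2) points; y: labels in {+1, -1}."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), t in zip(X, y):
            margin = t * (w[0] * x1 + w[1] * x2 + b)
            if margin < 1:
                # Point inside the margin: hinge-loss subgradient step.
                w[0] += lr * (t * x1 - lam * w[0])
                w[1] += lr * (t * x2 - lam * w[1])
                b += lr * t
            else:
                # Correctly classified with margin: only the L2 shrinkage acts,
                # which is what pushes the margin to be as wide as possible.
                w[0] -= lr * lam * w[0]
                w[1] -= lr * lam * w[1]
    return w, b

def sign(v):
    return 1 if v >= 0 else -1

X = [(2, 2), (3, 3), (2, 3), (-2, -2), (-3, -3), (-2, -3)]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
print([sign(w[0] * x1 + w[1] * x2 + b) for x1, x2 in X])  # matches y
```

Kernel SVMs apply the same margin idea after implicitly mapping points into a higher-dimensional feature space, which is why kernel choice matters so much in the table above.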
Decision Tree Learning
Decision tree learning involves creating a tree-like structure where each internal node
represents a test on a feature, each branch represents the outcome, and each leaf represents
a class or regression value.

Definition: A flowchart-like model that recursively splits data into subsets based on feature values.
Algorithms: ID3, C4.5, CART, CHAID.
Strengths: Easy to interpret, handles both numerical and categorical data, no feature scaling needed.
Weaknesses: Prone to overfitting; small changes in the data can produce a very different tree.
Applications: Customer segmentation, risk analysis, resource allocation.
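The core of tree construction is choosing one good split, repeated recursively. The sketch below picks the threshold on a single numeric feature that minimizes weighted Gini impurity, the criterion CART uses; the (feature value, class) pairs are hypothetical:

```python
def gini(labels):
    """Gini impurity of a list of binary labels (0 = pure)."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 1.0 - p * p - (1 - p) * (1 - p)

def best_split(xs, ys):
    """Return the threshold t (split: x <= t) with lowest weighted impurity."""
    best_score, best_t = float("inf"), None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best_score:
            best_score, best_t = score, t
    return best_t

xs = [1, 2, 3, 10, 11, 12]
ys = [0, 0, 0, 1, 1, 1]
print(best_split(xs, ys))  # 3: separates the two classes perfectly
```

A full tree applies this search to every candidate feature at every node and recurses on each subset, which is also where the overfitting risk in the table comes from: without pruning or depth limits, the tree keeps splitting until leaves are pure.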
Regression and Types
Regression is a statistical method used in supervised learning to model relationships
between a dependent variable and one or more independent variables.

Linear Regression: Predicts a continuous output by modeling a linear relationship between input features and the target variable.
Logistic Regression: Classifies binary or categorical data by estimating probabilities using a logistic function.
Polynomial Regression: Extends linear regression by fitting a polynomial curve to the data.
Ridge Regression: Adds L2 regularization to reduce overfitting by penalizing large coefficients.
Lasso Regression: Uses L1 regularization to perform feature selection by driving some coefficients to zero.
Support Vector Regression (SVR): Extends SVM to predict continuous outcomes.
Applications: Sales forecasting, price prediction, healthcare analysis, weather modeling.
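The contrast between plain linear regression and ridge can be shown in closed form for a single feature. The sketch below assumes the intercept is not penalized, so the ridge slope is simply the least-squares slope with the L2 penalty added to the denominator; the noise-free data are illustrative:

```python
# Toy data lying exactly on y = 2x + 1.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
sxx = sum((x - mx) ** 2 for x in xs)                    # spread of x
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))  # x-y co-variation

def fit(lam=0.0):
    """lam = 0 gives ordinary least squares; lam > 0 gives ridge."""
    slope = sxy / (sxx + lam)  # L2 penalty shrinks the slope toward zero
    return slope, my - slope * mx

print(fit(0.0))  # (2.0, 1.0): recovers the true line
print(fit(5.0))  # smaller slope: the coefficient is shrunk
```

Lasso behaves differently: its L1 penalty can set a coefficient exactly to zero rather than merely shrinking it, which is why the table credits it with feature selection.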
