Supervised Machine Learning Regression and Classification

The document presents an overview of Supervised Machine Learning, focusing on regression and classification techniques. It explains key concepts such as linear regression, logistic regression, cost function optimization, and regularization methods to prevent overfitting. The conclusion emphasizes the significance of these techniques in making accurate predictions and classifications.

Uploaded by Sourav Bisht
Copyright
© All Rights Reserved

Supervised Machine Learning: Regression and Classification
Made by Sourav Bisht

Submitted to: Mr. Samir Rana

Course: BTech CSE

Section: A2

Roll No.: 2219749 (65)

A presentation on a MOOC course completed independently.


What is Machine Learning?
Machine Learning is a subset of Artificial Intelligence that focuses on the development of algorithms and
models that allow computer systems to learn from data and make predictions or decisions without being
explicitly programmed. It involves training a machine to recognize patterns and make data-driven decisions
or predictions. There are various types of Machine Learning techniques, such as supervised learning,
unsupervised learning, and reinforcement learning, each suited for different types of problems.

Types of Machine Learning:

1. Supervised Machine Learning


2. Unsupervised Machine Learning
3. Reinforcement Machine Learning
Supervised Machine Learning: Regression and Classification
Regression and Classification are two types of problems that
are solved using Supervised Machine Learning. Regression
involves predicting continuous values, while classification
involves predicting categorical values. Regression algorithms
include Linear Regression, Polynomial Regression, and Support
Vector Regression, while classification algorithms include
Logistic Regression, Decision Trees, and Support Vector
Machines. These algorithms are used in various applications
such as predicting house prices, stock prices, and diagnosing
medical conditions.
Understanding Linear Regression
Below are the main use cases for which linear regression is applied. Although linear regression is a simple approach built on straightforward mathematics, it is effective for solving many practical problems.

Predicting Continuous Values
Linear regression models a linear relationship between variables. It predicts a continuous output based on input features.

Simple and Intuitive
This technique is straightforward to understand and implement, making it a common choice for various applications.
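As a concrete sketch of the idea above, simple linear regression with one feature can be fitted with the closed-form least-squares solution. The house-size data below is an illustrative assumption, not data from the presentation:

```python
# Least-squares fit of y = w*x + b: slope = cov(x, y) / var(x),
# intercept chosen so the line passes through the means.
def fit_linear(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

sizes = [50, 80, 100, 120]     # hypothetical house sizes (sq. metres)
prices = [150, 240, 300, 360]  # generated from price = 3 * size
w, b = fit_linear(sizes, prices)
print(w, b)  # -> 3.0 0.0
```

The fitted slope and intercept recover the underlying linear relationship exactly here because the toy data is noise-free; on real data they give the best-fitting line in the least-squares sense.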
Multiple Linear Regression
1. Expanding Predictability
Multiple linear regression incorporates multiple independent variables to enhance predictive power.

2. Complex Relationships
It captures more complex relationships between variables, providing insights into the combined effect of different factors.
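For two independent variables, the least-squares weights can still be computed in closed form by solving the 2x2 normal equations on centered data. This is a minimal sketch; the data below is an illustrative assumption generated from y = 2·x1 + 3·x2:

```python
# Two-feature linear regression via the normal equations,
# solved with Cramer's rule on mean-centered data.
def fit_multi(X, y):
    n = len(X)
    mx = [sum(x[j] for x in X) / n for j in (0, 1)]
    my = sum(y) / n
    # Centered sums of squares and cross-products
    s11 = sum((x[0] - mx[0]) ** 2 for x in X)
    s22 = sum((x[1] - mx[1]) ** 2 for x in X)
    s12 = sum((x[0] - mx[0]) * (x[1] - mx[1]) for x in X)
    c1 = sum((x[0] - mx[0]) * (yi - my) for x, yi in zip(X, y))
    c2 = sum((x[1] - mx[1]) * (yi - my) for x, yi in zip(X, y))
    det = s11 * s22 - s12 * s12
    w1 = (c1 * s22 - c2 * s12) / det
    w2 = (c2 * s11 - c1 * s12) / det
    b = my - w1 * mx[0] - w2 * mx[1]
    return [w1, w2], b

X = [[1, 2], [2, 1], [3, 3], [4, 5]]
y = [8, 7, 15, 23]            # y = 2*x1 + 3*x2
weights, bias = fit_multi(X, y)
print(weights, bias)
```

With more than two features the same normal equations are solved with general linear algebra (e.g. a matrix solve) rather than Cramer's rule.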
Logistic Regression
Categorical Classification
Logistic regression classifies data into discrete categories, such as yes/no, true/false, or positive/negative.

Probability Estimation
It calculates the probability of an observation belonging to a particular category based on its features.
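Both ideas can be sketched in a few lines: the sigmoid maps a linear score to a probability, and gradient descent on the log-loss fits the weights. The pass/fail study-hours data below is a toy assumption for illustration:

```python
import math

# Sigmoid squashes any real score into a probability in (0, 1).
def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# One-feature logistic regression trained by gradient descent
# on the log-loss (cross-entropy) cost.
def fit_logistic(xs, ys, lr=0.1, steps=5000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        errs = [sigmoid(w * x + b) - y for x, y in zip(xs, ys)]
        w -= lr * sum(e * x for e, x in zip(errs, xs)) / n
        b -= lr * sum(errs) / n
    return w, b

hours = [1, 2, 3, 4, 5, 6]
passed = [0, 0, 0, 1, 1, 1]   # label 1 = passed the exam
w, b = fit_logistic(hours, passed)
print(sigmoid(w * 6 + b) > 0.5)  # high probability of passing at 6 hours
```

Thresholding the predicted probability at 0.5 turns the probability estimate into a categorical yes/no classification.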
Cost Function and Optimization

Evaluating Performance
The performance of a machine learning model can be evaluated using a cost function, which measures the disparity between the predicted and actual values. By optimizing the cost function, the model can be trained to minimize errors and improve its performance. Various optimization algorithms, such as gradient descent, can be used to find the optimal values for the model's parameters.

Minimizing Error
Minimizing error is crucial in machine learning, as it helps improve the accuracy and reliability of the model's predictions. By finding the optimal values for the model's parameters through the chosen optimization algorithm, the model can effectively minimize error and increase its predictive power.
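A common choice of cost function for regression is the mean squared error, one measure of the disparity between predicted and actual values. The example predictions and targets below are assumptions for illustration:

```python
# Mean squared error: average of the squared differences
# between predictions and targets.
def mse_cost(preds, targets):
    n = len(preds)
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / n

cost = mse_cost([2.5, 0.0, 2.0], [3.0, -0.5, 2.0])
print(cost)  # (0.25 + 0.25 + 0) / 3, about 0.167
```

Training then amounts to choosing the parameters that make this number as small as possible.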
Gradient Descent
1 Iterative Approach
Gradient descent updates model parameters
iteratively in the direction of decreasing cost.

2 Learning Rate
The learning rate determines the step size
during parameter updates.

3 Convergence
The algorithm continues until it reaches a
minimum cost or a predetermined stopping
criterion.
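The three points above can be sketched directly: an iterative update, a learning rate scaling each step, and a stopping criterion. The function being minimised, f(w) = (w - 4)^2, is an illustrative assumption with its minimum at w = 4:

```python
# Minimise f(w) = (w - 4)^2 whose gradient is 2 * (w - 4).
def gradient_descent(grad, w0, lr=0.1, tol=1e-8, max_steps=10_000):
    w = w0
    for _ in range(max_steps):
        step = lr * grad(w)   # learning rate scales the step size
        w -= step             # move against the gradient
        if abs(step) < tol:   # convergence / stopping criterion
            break
    return w

w_min = gradient_descent(lambda w: 2 * (w - 4), w0=0.0)
print(round(w_min, 4))  # -> 4.0
```

Too large a learning rate makes the updates overshoot and diverge; too small a rate makes convergence needlessly slow, which is why the learning rate is a key hyperparameter.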
Regularization Techniques
Preventing Overfitting
Regularization techniques are used to prevent overfitting, which
occurs when a model is too complex and starts to memorize the
training data rather than learning the underlying patterns.
Techniques like L1 and L2 regularization introduce a penalty term to
the cost function, discouraging excessive complexity in the model
and promoting simpler and more generalizable solutions.
Regularization helps improve the model's performance on unseen
data and reduces the chances of overfitting.

L1 and L2 Regularization
L1 and L2 regularization are two commonly used techniques. L1
regularization adds a penalty term to the loss function based on the
absolute values of the model's parameters, while L2 regularization
adds a penalty term based on the squared values of the
parameters. These techniques help control the complexity of the
model and prevent overfitting by discouraging large parameter
values.
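The two penalty terms described above can be written out directly: L1 sums the absolute values of the parameters, while L2 sums their squares, and either is added to the base cost. The lambda value and weights below are illustrative assumptions:

```python
# L1 penalty: lambda times the sum of absolute parameter values.
def l1_penalty(weights, lam):
    return lam * sum(abs(w) for w in weights)

# L2 penalty: lambda times the sum of squared parameter values.
def l2_penalty(weights, lam):
    return lam * sum(w * w for w in weights)

def regularized_cost(base_cost, weights, lam, kind="l2"):
    penalty = l1_penalty if kind == "l1" else l2_penalty
    return base_cost + penalty(weights, lam)

w = [3.0, -4.0]
cost_l1 = regularized_cost(10.0, w, lam=0.1, kind="l1")  # 10 + 0.1 * 7
cost_l2 = regularized_cost(10.0, w, lam=0.1, kind="l2")  # 10 + 0.1 * 25
```

Because both penalties grow with the parameter magnitudes, minimising the regularized cost pushes the model toward smaller weights and hence simpler, more generalizable solutions; larger lambda strengthens that pressure.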
Concluding Remarks and Key Takeaways
Supervised machine learning offers powerful tools for
prediction and classification, with various techniques suitable
for different scenarios. Key takeaways include the importance
of understanding cost function and gradient descent for
optimization, and regularization to prevent overfitting.
