Linear Regression
Types of Machine Learning
01 Supervised Learning - Classification, Regression
02 Unsupervised Learning - Clustering, Dimensionality Reduction
03 Semi-supervised Learning - Self-Training, Co-Training
04 Reinforcement Learning - Q-Learning, Deep Reinforcement Learning
Supervised ML Techniques
- Linear Models
- Tree-Based Models
- Nearest Neighbor
Example regression tasks:
- Salary Prediction
- Age Prediction
- House Price Prediction
Linear Regression
Simple Linear Regression
One-variable regression is also called univariate regression.
The fitted line has the form ŷ = bx + a, where b is the slope and a is the intercept. In the example plot, the slope b is positive and the line crosses the y-axis at 5, hence a = 5.
So, ŷ = bx + 5
Formula
Function: f_w,b(x) = wx + b
Parameters: w, b
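To make the model concrete, here is a minimal sketch of the prediction function in Python (the function name and the example parameter values are my own, for illustration only):

```python
def predict(x, w, b):
    """Linear model prediction: f_w,b(x) = w*x + b."""
    return w * x + b

# Illustrative parameters (assumed, not from the slides): slope 0.5, intercept 5
print(predict(2.0, w=0.5, b=5.0))  # 0.5 * 2.0 + 5.0 = 6.0
```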
Cost Function
The cost function is our way of measuring how well our model fits the data:

J(w,b) = (1/(2m)) * Σ_{i=1..m} (ŷi - yi)^2

Where:
- m is the number of training examples
- ŷi = f_w,b(xi) is the model's prediction for example i
- yi is the actual target value for example i
Our Goal
Find the values of w and b that minimize the cost function:

minimize J(w,b) over w, b
Why do we use the squared difference and not just the difference?
1) Sensitivity to Outliers:
- The squared difference weights larger errors (outliers) more heavily than the absolute difference.
- Imagine a data point with a very high or low target value compared to the rest. The squared term in (yi - ŷi)^2 amplifies the error for this point.
- This forces the model to pay more attention to fitting these points during training. Note, however, that this also makes the fit more sensitive to extreme values, so outliers in the data deserve a closer look before training.
- In addition, with the plain (unsquared) difference, positive and negative errors would cancel each other out, so a poorly fitting line could still report a cost near zero; squaring keeps every error positive.
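The effect is easy to see numerically. A small sketch (the residual values are assumed, for illustration) comparing the mean absolute and mean squared error on data containing one outlier:

```python
# Residuals (yi - ŷi); the last value is an outlier.
errors = [0.1, -0.2, 0.15, 5.0]

mean_abs = sum(abs(e) for e in errors) / len(errors)
mean_sq = sum(e ** 2 for e in errors) / len(errors)

print(f"mean absolute error: {mean_abs:.2f}")  # ~1.36
print(f"mean squared error:  {mean_sq:.2f}")   # ~6.27, dominated by the outlier
```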
Dividing by the total number of data points (m) directly calculates the average squared error; this is the most common normalization approach. It gives the cost function value a clear interpretation: the average squared difference between predicted and actual values. In practice the sum is often divided by 2m instead; the extra factor of 1/2 cancels during differentiation and does not change where the minimum lies. The worked examples below use this 1/(2m) convention.
Function: f_w,b(x) = wx + b
Parameters: w, b
Cost Function: J(w,b) = (1/(2m)) * Σ_{i=1..m} (f_w,b(xi) - yi)^2
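A direct implementation of this cost function, as a sketch (the names are my own; the 1/(2m) convention matches the formulas above):

```python
def compute_cost(xs, ys, w, b):
    """Cost J(w, b): average squared error with the 1/(2m) convention."""
    m = len(xs)
    total = 0.0
    for x, y in zip(xs, ys):
        y_hat = w * x + b          # prediction f_w,b(x)
        total += (y_hat - y) ** 2  # squared error for this example
    return total / (2 * m)
```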
Cost Function For One Parameter
To build intuition, fix b = 0 so that the model has a single parameter:
Function: f_w(x) = wx
Parameters: w
Cost Function: J(w) = (1/(2m)) * Σ_{i=1..m} (f_w(xi) - yi)^2
Goal: minimize J(w) over w
Evaluating J(w) for b = 0 at different fixed values of w:
- For w = 1: J(w) = 0
- For w = 0.5: J(w) = 0.583
- For w = 0: J(w) = 2.3
- In general, for any fixed w = n, plotting J(w) against w traces out a bowl-shaped (convex) curve; its lowest point marks the best-fitting w.
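These values can be reproduced in code. The slides do not list the training set, so the points below are an assumption, chosen because they yield exactly the J(w) values quoted above:

```python
# Assumed training set (not given in the slides): (1,1), (2,2), (3,3).
xs = [1.0, 2.0, 3.0]
ys = [1.0, 2.0, 3.0]
m = len(xs)

for w in (1.0, 0.5, 0.0):
    # J(w) with b fixed at 0, using the 1/(2m) convention
    cost = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)
    print(f"w = {w}: J(w) = {cost:.3f}")

# Output:
# w = 1.0: J(w) = 0.000
# w = 0.5: J(w) = 0.583
# w = 0.0: J(w) = 2.333
```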
Cost Function For Two Parameters
Function: f_w,b(x) = wx + b
Parameters: w, b
Cost Function: J(w,b) = (1/(2m)) * Σ_{i=1..m} (f_w,b(xi) - yi)^2
With both parameters free, J(w,b) is a surface over the (w, b) plane; for linear regression it is bowl-shaped (convex), with a single global minimum.
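To see the bowl shape, J(w, b) can be evaluated on a coarse grid; a minimal sketch, reusing the assumed training set from above:

```python
# Evaluate J(w, b) on a small grid of (w, b) values to see the bowl shape.
xs = [1.0, 2.0, 3.0]  # assumed training set, as above
ys = [1.0, 2.0, 3.0]
m = len(xs)

def cost(w, b):
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)

for w in (0.0, 0.5, 1.0):
    print("  ".join(f"J({w:.1f}, {b:+.1f}) = {cost(w, b):.3f}" for b in (-1.0, 0.0, 1.0)))

# The smallest value on this grid is J(1.0, 0.0) = 0, the bottom of the bowl.
```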
Key benefits of linear regression
- Scalability: Linear regression is not computationally heavy and therefore fits well in cases where scaling is essential. For example, the model scales well with increased data volume (big data).
- Optimal for online settings: The model can be trained and retrained with each new example to generate predictions in real time, unlike neural networks or support vector machines, which are computationally heavy and require plenty of computing resources and substantial waiting time to retrain on a new dataset. A sketch of such an incremental update follows below.
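As an illustration of the online setting, here is a sketch of incremental training with one stochastic-gradient step per incoming example (the learning rate, initial values, and simulated data stream are all assumed):

```python
def sgd_step(x, y, w, b, lr):
    """One stochastic-gradient step on the squared-error loss (1/2)*(w*x + b - y)^2."""
    error = (w * x + b) - y  # residual for this single example
    w -= lr * error * x      # gradient of the loss w.r.t. w is error * x
    b -= lr * error          # gradient of the loss w.r.t. b is error
    return w, b

w, b = 0.0, 0.0
lr = 0.05  # learning rate (assumed)

# Simulated stream of examples following y = x; each arrives one at a time.
stream = [(1.0, 1.0), (2.0, 2.0), (3.0, 3.0)] * 200
for x, y in stream:
    w, b = sgd_step(x, y, w, b, lr)

print(f"w = {w:.3f}, b = {b:.3f}")  # approaches w = 1, b = 0 for this stream
```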