Linear Regression
Linear Regression is a supervised machine learning algorithm that models the linear relationship between one or more independent variables (the explanatory variables) and a dependent variable (the
response variable).
The equation for a simple linear regression model can be written as
follows:
y = b0 + b1 * x
Here y is the predicted value of the dependent variable, x is the independent variable, b0 is
the intercept term (the value of y when x is zero), and b1 is the slope
coefficient (the change in y for a unit change in x).
The goal of Linear Regression is to find the best values for b0 and b1
such that the line best fits the data points, minimizing the error,
i.e. the difference between the predicted values and the actual values.
There are two main types of Linear Regression models: Simple Linear
Regression and Multiple Linear Regression. Simple Linear Regression
uses a single independent variable and is written as:
Y = b0 + b1 * X
Multiple Linear Regression uses two or more independent variables and
is written as:
Y = b0 + b1 * X1 + b2 * X2 + … + bn * Xn
Here Y is the dependent variable, X1, X2, …, Xn are the independent
variables, b0 is the intercept term, and b1, b2, …, bn are the slope
coefficients.
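To make the two equations concrete, here is a minimal sketch (with made-up coefficient and feature values) showing that simple regression is just the one-variable special case of the multiple-regression sum:

```python
import numpy as np

# Simple linear regression: one feature
b0, b1 = 2.0, 0.5          # intercept and slope (made-up values)
x = 4.0
y_simple = b0 + b1 * x     # 2.0 + 0.5 * 4.0 = 4.0

# Multiple linear regression: n features, written as a dot product
b = np.array([0.5, -1.0, 2.0])      # slope coefficients b1..bn (made up)
X_row = np.array([4.0, 1.0, 0.5])   # one observation X1..Xn
y_multi = b0 + np.dot(b, X_row)     # 2.0 + (2.0 - 1.0 + 1.0) = 4.0
```

With a single feature, the dot product collapses to b1 * x, which is exactly the simple-regression formula.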
In both types of linear regression, the goal is to find the best values
for the intercept and slope coefficients that minimize the difference
between the predicted values and the actual values. Linear regression
uses an optimization method, such as ordinary least squares or gradient
descent, to fit the model. The best-fitting line is the line that has the smallest
overall error: we take the difference between the
predicted values and the actual values for each data point, and then
combine these differences into a single loss to be minimized.
From here on we will write the simple regression line as y = mx + c, where c is
the intercept term (the value of y when x is zero), and m is the slope
of the line. Once the values of c and m are determined, we can use the linear
regression equation to predict y for any new value of x.
In this tutorial you will learn how the gradient descent algorithm
works and implement it from scratch in Python. First we look at what
linear regression is, then we define the loss function. We then learn how
gradient descent finds the values of m and c that minimize it, where m
is the slope of the line and c is the y intercept. Today we will use
this equation to train our model with a given dataset and predict
the value of y for any given x, while minimizing the
error.
Loss Function
The loss is the error in our predicted value of m and c. Our goal
is to minimize this error and obtain the most accurate values of m
and c.
For each data point we take the difference between the actual value y
and the predicted value ŷ = mx + c, and square it so that positive
and negative errors do not cancel out. We then average over all n
data points:
E = (1/n) * Σ (y − ŷ)²
So we square the error and find the mean, hence the name Mean
Squared Error. Now that we have defined the loss function, let's
learn how to minimize it and find the optimal values of m and
c.
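As a quick illustration, here is a small sketch of the loss as a Python function (the function name mse and the sample numbers are just for illustration):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean Squared Error: the mean of the squared differences."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2)

# With m = 1 and c = 0, the predictions for x = [1, 2, 3] are [1, 2, 3]:
y_true = [1.0, 2.5, 2.0]
y_pred = [1.0, 2.0, 3.0]
print(mse(y_true, y_pred))  # (0 + 0.25 + 1) / 3 ≈ 0.4167
```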
The Gradient Descent Algorithm
Gradient descent is an iterative optimization algorithm used to find
the minimum of a function. Here that function is our Loss
Function.
Imagine a valley and a person with no way to see, who
wants to get to the bottom of the valley. He goes down the slope
and takes large steps when the slope is steep and small steps
when the slope is less steep. He decides his next position based
on his current position and stops when he gets to the bottom of
the valley, which was his goal. Let's apply the same idea to m and c,
step by step:
1. Initially let m = 0 and c = 0. Let L be our learning rate, which
controls how much m and c change with each step. L could be a small
value like 0.0001 for good accuracy.
2. Calculate the partial derivative of the loss function with respect
to m (call it D_m), and with respect to c (call it D_c):
D_m = (−2/n) * Σ x * (y − ŷ)
D_c = (−2/n) * Σ (y − ŷ)
3. Update the current values of m and c using the
following equations:
m = m − L × D_m
c = c − L × D_c
4. Repeat this process until our loss function is very small, ideally
close to our loss = 0.
Going back to our analogy, D can be thought of as the person's current
slope and L can be the speed with which he moves. Now the new
value of m depends on its current value, just like the person's
next position, and L×D will be the size of the steps he will take.
The value of m and c that we are left with now will be the optimum
values, and we can use them to
make predictions!
Now let's convert everything above into code and see our model
in action!
# Making the imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (12.0, 9.0)

# Preprocessing input data
data = pd.read_csv('data.csv')
X = data.iloc[:, 0]
Y = data.iloc[:, 1]
plt.scatter(X, Y)
plt.show()

# Building the model
m = 0
c = 0

L = 0.0001  # The learning rate
epochs = 1000  # The number of iterations of gradient descent

n = float(len(X))  # The number of data points

# Performing gradient descent
for i in range(epochs):
    Y_pred = m * X + c  # The current predicted value of Y
    D_m = (-2 / n) * sum(X * (Y - Y_pred))  # Partial derivative with respect to m
    D_c = (-2 / n) * sum(Y - Y_pred)  # Partial derivative with respect to c
    m = m - L * D_m  # Update m
    c = c - L * D_c  # Update c

print(m, c)
1.4796491688889395 0.10148121494753726
# Making predictions
Y_pred = m * X + c

plt.scatter(X, Y)
plt.plot([min(X), max(X)], [min(Y_pred), max(Y_pred)], color='red')  # The regression line
plt.show()
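As a sanity check, it can help to compare the gradient-descent result against a closed-form least-squares fit, for example via NumPy's np.polyfit. The sketch below uses synthetic data (since data.csv may not be at hand) with made-up true values m = 1.5 and c = 0.5, and a learning rate chosen for that data's scale; both methods should land on nearly the same m and c:

```python
import numpy as np

# Synthetic data around a known line (values made up for this check)
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, 100)
Y = 1.5 * X + 0.5 + rng.normal(0, 1, 100)

# Closed-form least-squares fit (degree-1 polynomial)
m_ls, c_ls = np.polyfit(X, Y, 1)

# Gradient descent, exactly as in the tutorial
m, c = 0.0, 0.0
L = 0.01          # learning rate tuned for this data's scale
epochs = 20000
n = float(len(X))
for _ in range(epochs):
    Y_pred = m * X + c
    D_m = (-2 / n) * np.sum(X * (Y - Y_pred))
    D_c = (-2 / n) * np.sum(Y - Y_pred)
    m = m - L * D_m
    c = c - L * D_c

print(m_ls, c_ls)
print(m, c)  # should closely match the line above
```

Both approaches minimize the same Mean Squared Error, so with enough epochs gradient descent converges to the same line that the closed-form solution finds directly.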
Gradient descent is one of the simplest and most widely used
optimization algorithms in machine learning, and understanding how it
works is a great place to start.