ML | Multiple Linear Regression using Python

Last Updated : 10 Apr, 2025

Linear regression is a fundamental statistical method widely used for predictive analysis. In its simplest form, it models the relationship between a dependent variable and a single independent variable by fitting a linear equation to the observed data.

Multiple Linear Regression is an extension of this concept that allows us to model the relationship between a dependent variable and two or more independent variables. This technique is used to understand how multiple features collectively affect the outcome, fitting a linear equation to predict the dependent variable based on the values of these features.

Steps for Multiple Linear Regression

The steps for performing multiple linear regression are almost the same as for simple linear regression; the difference lies mainly in the evaluation. We can use it to find out which factor has the highest impact on the predicted output and how the different variables relate to one another.

The equation for multiple linear regression is:

y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_n X_n

where:

  • y is the dependent variable
  • X_1, X_2, \ldots, X_n are the independent variables
  • \beta_0 is the intercept
  • \beta_1, \beta_2, \ldots, \beta_n are the slopes (coefficients) of the independent variables

The goal of the algorithm is to find the best-fit equation that predicts the dependent variable from the independent variables. The regression model learns this function from a dataset with known X and y values and uses it to predict y for unseen X.
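
For instance, with made-up coefficients \beta_0 = 1, \beta_1 = 0.5 and \beta_2 = 2, an input with X_1 = 4 and X_2 = 3 would be predicted as y = 1 + 0.5 \cdot 4 + 2 \cdot 3 = 9.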

Handling Categorical Data with Dummy Variables

In a multiple regression model, we often encounter categorical data such as gender (male/female) or location (urban/rural). Since regression models expect numerical inputs, categorical data must be transformed into a usable form.

This is where Dummy Variables come into play. Dummy variables are binary variables (0 or 1) that represent the presence or absence of each category. For example:

  • Male: 1 if male, 0 otherwise
  • Female: 1 if female, 0 otherwise

In the case of multiple categories (e.g., color: “Red”, “Blue”, “Green”), we create a dummy variable for each category, excluding one to avoid multicollinearity. This process is called one-hot encoding, which converts categorical variables into a format suitable for regression models; a short sketch follows.
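
A minimal sketch of one-hot encoding with pandas (the color column and its values here are made up purely for illustration):

Python
import pandas as pd

df = pd.DataFrame({'color': ['Red', 'Blue', 'Green', 'Blue']})

# drop_first=True drops one category ('Blue', the first alphabetically)
# to avoid the dummy variable trap
dummies = pd.get_dummies(df['color'], prefix='color', drop_first=True)
print(dummies)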

Multicollinearity in Multiple Linear Regression

When building a multiple linear regression model, multicollinearity can arise. It occurs when two or more independent variables are highly correlated with each other, which makes it difficult to evaluate the individual contribution of each variable to the dependent variable.

There are two common techniques for detecting multicollinearity:

  1. Correlation Matrix: Examining the correlation matrix among the independent variables is a common way to detect multicollinearity. High correlations (close to 1 or -1) indicate potential multicollinearity.
  2. VIF (Variance Inflation Factor): VIF quantifies how much the variance of an estimated regression coefficient is inflated because the predictors are correlated. A high VIF (typically above 10) suggests multicollinearity; a code sketch of both checks follows this list.
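
A minimal sketch of both checks, assuming X is a pandas DataFrame of independent variables (statsmodels is used here for the VIF computation):

Python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

# 1. Correlation matrix among the predictors
print(X.corr())

# 2. One VIF per predictor; a constant column is added so the VIFs
#    are computed for a model that includes an intercept
X_const = add_constant(X)
vif = pd.Series(
    [variance_inflation_factor(X_const.values, i) for i in range(X_const.shape[1])],
    index=X_const.columns,
)
print(vif)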

Assumptions of Multiple Regression Model

Just like simple linear regression, multiple linear regression relies on a few assumptions (a sketch for checking them follows the list):

  1. Linearity: Relationship between dependent and independent variables should be linear.
  2. Homoscedasticity: Variance of errors should remain constant across all levels of independent variables.
  3. Multivariate Normality: Residuals should follow a normal distribution.
  4. No Multicollinearity: Independent variables should not be highly correlated.
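
A minimal, illustrative sketch of how the first three assumptions might be checked visually, assuming residuals from an already fitted model (for example, y_test and y_pred from the implementation below):

Python
import matplotlib.pyplot as plt

# y_test and y_pred are assumed to come from a fitted model,
# e.g. the implementation later in this article
residuals = y_test - y_pred

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Residuals vs. predictions: an even scatter around zero supports
# linearity and homoscedasticity
ax1.scatter(y_pred, residuals, alpha=0.3)
ax1.axhline(0, color='red')
ax1.set_xlabel('Predicted values')
ax1.set_ylabel('Residuals')

# Histogram of residuals: a roughly bell shape supports normality
ax2.hist(residuals, bins=30)
ax2.set_xlabel('Residual')

plt.show()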

Implementing Multiple Linear Regression Model in Python

We will use the California Housing dataset, which includes features like median income and average rooms, along with the target variable, house prices.

1. Importing Libraries

We will import numpy, pandas, matplotlib and scikit-learn for this.

Python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.datasets import fetch_california_housing

2. Load Dataset

  • Fetches the California Housing dataset from sklearn.datasets.
  • The dataset's features (such as median income and average rooms) are stored in X, and the target (house prices) is stored in y.
Python
california_housing = fetch_california_housing()

X = pd.DataFrame(california_housing.data, columns=california_housing.feature_names)
y = pd.Series(california_housing.target)

3. Select Features for Visualization

We select two features (MedInc for median income and AveRooms for average rooms) to keep the model simple and allow the fit to be visualized in 3D.

Python
X = X[['MedInc', 'AveRooms']]

4. Train-Test Split

We will use 80% of the data for training and 20% for testing.

Python
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

5. Initialize and Train Model

Python
model = LinearRegression()

model.fit(X_train, y_train)
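
After training, it can be helpful to inspect the learned parameters, which scikit-learn's LinearRegression exposes as intercept_ and coef_:

Python
# One intercept plus one coefficient per feature
print('Intercept:', model.intercept_)
print('Coefficients:', dict(zip(X.columns, model.coef_)))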

6. Make Predictions

Python
y_pred = model.predict(X_test)
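
Since the main difference from simple linear regression lies in the evaluation, it is worth scoring the predictions on the test set. A minimal sketch using scikit-learn's metrics (the choice of MSE and R² here is illustrative):

Python
from sklearn.metrics import mean_squared_error, r2_score

# Compare predictions against the held-out test targets
print('MSE:', mean_squared_error(y_test, y_pred))
print('R^2:', r2_score(y_test, y_pred))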

7. Visualizing the Best-Fit Plane in 3D

Python
fig = plt.figure(figsize=(10, 7))
ax = fig.add_subplot(111, projection='3d')

ax.scatter(X_test['MedInc'], X_test['AveRooms'],
           y_test, color='blue', label='Actual Data')

x1_range = np.linspace(X_test['MedInc'].min(), X_test['MedInc'].max(), 100)
x2_range = np.linspace(X_test['AveRooms'].min(), X_test['AveRooms'].max(), 100)
x1, x2 = np.meshgrid(x1_range, x2_range)

# Predict over the grid; a DataFrame keeps the feature names consistent with training
grid = pd.DataFrame(np.c_[x1.ravel(), x2.ravel()], columns=['MedInc', 'AveRooms'])
z = model.predict(grid).reshape(x1.shape)

ax.plot_surface(x1, x2, z, color='red', alpha=0.5, rstride=100, cstride=100)

ax.set_xlabel('Median Income')
ax.set_ylabel('Average Rooms')
ax.set_zlabel('House Price')
ax.set_title('Multiple Linear Regression Best-Fit Plane (3D)')

plt.show()

Output: 

[Figure: 3D scatter of actual house prices (blue) with the fitted regression plane (red)]

Visualizing Multiple Linear Regression

Blue points represent the actual house prices based on the features (MedInc and AveRooms) and the red surface represents the best fit plane predicted by the multiple linear regression model. This plot shows how two selected features influence the predicted house prices.