Experiment-3: Nisheeta Thakur 121A1114 D2
Theory:
Multivariate Linear Regression:
Multivariate Linear Regression refers to regression involving multiple dependent
variables (DVs). This contrasts with multiple linear regression, where there are
multiple independent variables but only one dependent variable.
The equation for multivariate linear regression can be generalized as:
Y=Xβ+ϵ
where:
Y: n×m matrix of dependent variables (responses), with m being the number
of dependent variables.
X: n×(p+1) matrix of independent variables (including a column of 1s for
the intercept).
β: (p+1)×m matrix of regression coefficients.
ϵ: n×m matrix representing the error or residuals.
The optimal parameters β can be estimated using the normal equation:
β̂ = (XᵀX)⁻¹XᵀY
This method minimizes the sum of squared errors to find the best-fitting line(s).
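The normal equation above can be sketched in a few lines of NumPy. This is an illustrative example on synthetic data (the variable names and the chosen dimensions n, p, m are assumptions, not part of the original report):

```python
import numpy as np

# Minimal sketch of the normal equation: beta_hat = (X^T X)^{-1} X^T Y.
# Synthetic data: n = 50 samples, p = 2 predictors, m = 2 responses.
rng = np.random.default_rng(0)
n, p, m = 50, 2, 2
X_raw = rng.normal(size=(n, p))
X = np.column_stack([np.ones(n), X_raw])   # prepend a column of 1s for the intercept
beta_true = np.array([[1.0, -2.0],
                      [3.0,  0.5],
                      [-1.0, 2.0]])        # (p+1) x m coefficient matrix
Y = X @ beta_true + 0.01 * rng.normal(size=(n, m))

# Solve (X^T X) beta = X^T Y; np.linalg.solve is numerically
# preferable to forming the explicit inverse of X^T X.
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
print(beta_hat.round(2))
```

With low noise, `beta_hat` recovers `beta_true` to about two decimal places.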
Part 1: One Independent Variable and Multiple Dependent Variables
Case Explanation:
In this case, we have one independent variable X and multiple dependent
variables Y1, Y2, …, Ym. The goal is to find the regression coefficients for each
dependent variable with respect to the single independent variable.
The equation becomes:
Yj = β0j + β1j X + ϵj, for each response j = 1, …, m
In matrix form, the optimal parameter matrix β̂ is again computed as:
β̂ = (XᵀX)⁻¹XᵀY
Here, X is the n×2 matrix whose columns are the intercept (all 1s) and the single
predictor, while Y is an n×m matrix with one column per response.
Program :
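A minimal sketch of what such a program might look like for the Part 1 case, assuming NumPy; the sample sizes, coefficients, and noise level below are illustrative assumptions:

```python
import numpy as np

# One independent variable x, two dependent variables y1 and y2,
# generated from known (assumed) coefficients plus small noise.
rng = np.random.default_rng(42)
n = 100
x = rng.uniform(0, 10, size=n)
y1 = 2.0 + 1.5 * x + rng.normal(scale=0.1, size=n)
y2 = -1.0 + 0.8 * x + rng.normal(scale=0.1, size=n)

X = np.column_stack([np.ones(n), x])   # n x 2 design matrix: [1, x]
Y = np.column_stack([y1, y2])          # n x 2 response matrix

# lstsq solves the least-squares problem for every response column at once
beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(beta_hat.round(2))
```

Each column of `beta_hat` holds the intercept and slope for one dependent variable, so the two simple regressions are fitted in a single call.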
Output :
Conclusion: