
19EAC385 R Programming Lab

Date: ..../....../......

Lab Sheet 3
Linear Algebra using R
Aim
• To perform linear algebra in R environment

Introduction
Linear algebra in data science refers to the use of mathematical concepts involving
vectors, matrices, and linear transformations to manipulate and analyze data. It
provides useful tools for most algorithms and processes in data science, such as
machine learning, statistics, and big data analytics. It turns theoretical data models
into practical solutions that can be applied in real-world situations.

Exercises
1. Vectors: Create one horizontal and one vertical vector.
Perform the arithmetic operations on them: vector addition,
subtraction, multiplication and division; vector dot product;
cross product; L1 norm, L2 norm, L∞ norm.

Department of ECE 21 Sarang KP [AM.EN.U4EAC22061]



Figure 1: Vector operations in R


0.0.1 Inference: The vector operations involve creating horizontal and
vertical vectors, performing arithmetic operations element-wise,
computing the dot and cross products, and calculating the L1, L2,
and L∞ norms to measure vector magnitude.
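The operations above can be sketched in base R. This is a minimal illustration with made-up vectors; base R has no built-in cross product, so it is written out component by component for 3-vectors:

```r
# A plain R vector has no orientation; matrix() fixes it explicitly.
h <- matrix(c(1, 2, 3), nrow = 1)   # horizontal (1 x 3) vector
v <- matrix(c(4, 5, 6), ncol = 1)   # vertical (3 x 1) vector

a <- c(1, 2, 3); b <- c(4, 5, 6)
a + b; a - b; a * b; a / b          # element-wise arithmetic

dot <- sum(a * b)                   # dot product
# Cross product of two 3-vectors, written out component by component
cross <- c(a[2] * b[3] - a[3] * b[2],
           a[3] * b[1] - a[1] * b[3],
           a[1] * b[2] - a[2] * b[1])

l1   <- sum(abs(a))                 # L1 norm (sum of absolute values)
l2   <- sqrt(sum(a^2))              # L2 norm (Euclidean length)
linf <- max(abs(a))                 # L-infinity norm (largest absolute entry)
```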

2. Matrix operations: Addition, Subtraction, Scalar Multiplication,
Matrix Multiplication, Transpose, Inverse.

Figure 2: Matrix operations in R


0.0.2 Inference: Matrix operations include addition, subtraction, scalar
multiplication, and matrix multiplication, along with computing
the transpose using t() and checking invertibility using det() before
applying solve().
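A minimal base-R sketch of these operations, using small example matrices (not the ones from the lab screenshots):

```r
A <- matrix(c(2, 1, 1, 3), nrow = 2)   # matrix() fills column by column
B <- matrix(c(1, 0, 2, 1), nrow = 2)

A + B; A - B                # element-wise addition and subtraction
3 * A                       # scalar multiplication
P  <- A %*% B               # matrix multiplication (* would be element-wise)
At <- t(A)                  # transpose

# Invert only when the determinant is non-zero (matrix is non-singular)
if (det(A) != 0) {
  Ainv <- solve(A)
}
```

Note that `%*%` is the matrix product; plain `*` multiplies element by element, a common source of bugs.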

3. Matrix determinant and adjoint

Figure 3: Determinant and adjoint in R

0.0.3 Inference: The determinant of a matrix is calculated using det(A),
and the adjoint (adjugate), which equals the inverse multiplied by the
determinant, is computed using solve(A) * det(A).
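The relation adj(A) = det(A) * A⁻¹ can be checked directly in base R; A below is an illustrative 2x2 matrix:

```r
A   <- matrix(c(4, 2, 7, 6), nrow = 2)   # rows: [4, 7] and [2, 6]
dA  <- det(A)                            # 4*6 - 7*2 = 10
adj <- solve(A) * dA                     # adjugate via det(A) * inverse
A %*% adj                                # should equal det(A) * I
```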


4. Matrix decomposition: LU, QR, NMF, Eigen decomposition, SVD

Figure 4: Matrix decompositions in R: (a) LU, (b) QR, (c) Eigen, (d) SVD


0.0.4 Inference: Matrix decompositions break a matrix into simpler
components: LU splits it into lower (L) and upper (U) triangular
matrices with lu() (from the Matrix package), QR factorizes it into an
orthogonal (Q) and an upper-triangular (R) matrix with qr(), NMF
approximates a non-negative matrix using nmf() (from the NMF package),
Eigen decomposition extracts eigenvalues and eigenvectors with eigen(),
and SVD decomposes a matrix into U, Σ, and V with svd().
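QR, eigen decomposition, and SVD are available in base R; lu() and nmf() need the Matrix and NMF add-on packages, so they are only noted in comments below. A is a small symmetric example matrix:

```r
A <- matrix(c(4, 2, 2, 3), nrow = 2)   # symmetric example matrix

# QR: A = Q %*% R, with Q orthogonal and R upper triangular
qrA <- qr(A)
Q <- qr.Q(qrA); R <- qr.R(qrA)

# Eigen decomposition: A %*% v = lambda * v for each eigenpair
eA <- eigen(A)                         # eA$values, eA$vectors

# SVD: A = U %*% diag(d) %*% t(V)
sA <- svd(A)

# LU and NMF require add-on packages, e.g.:
#   Matrix::lu(A)       # LU factorization
#   NMF::nmf(X, rank)   # non-negative matrix factorization
```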

5. Eigenvalues and eigenvectors

Figure 5: Eigenvalues and eigenvectors in R

0.0.5 Inference: Eigenvalues and eigenvectors are computed using
eigen(A), where eigenvalues represent scaling factors and eigenvectors
indicate the directions of the transformation.
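The defining relation A v = λ v can be verified directly on the output of eigen(); A below is a small example matrix whose eigenvalues are 3 and 1:

```r
A <- matrix(c(2, 1, 1, 2), nrow = 2)   # symmetric, eigenvalues 3 and 1
e <- eigen(A)
e$values                # scaling factors, sorted in decreasing order
e$vectors               # one unit-length eigenvector per column

# Verify A %*% v = lambda * v for the first eigenpair
v1 <- e$vectors[, 1]
as.vector(A %*% v1) - e$values[1] * v1   # approximately the zero vector
```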


6. PCA: Covariance and correlation matrices

Figure 6: PCA with covariance and correlation matrices in R

0.0.6 Inference: PCA analyzes the covariance and correlation matrices
using cov() and cor() to identify relationships between variables,
while prcomp() performs dimensionality reduction to capture the most
important features of the data.
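A small sketch with simulated data (three columns with made-up names; z is deliberately correlated with x so the first principal component dominates):

```r
set.seed(1)
x <- rnorm(50)
X <- cbind(x = x,
           y = rnorm(50),
           z = 2 * x + rnorm(50, sd = 0.1))   # z is strongly tied to x

cov(X)                         # covariance matrix
cor(X)                         # correlation matrix
p <- prcomp(X, scale. = TRUE)  # PCA on standardized variables
summary(p)                     # proportion of variance per component
head(p$x)                      # data projected onto the principal components
```

With `scale. = TRUE`, PCA works on the correlation matrix, so the component variances (p$sdev^2) sum to the number of variables.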


Questions
1. What is the application of linear algebra in data science
and machine learning?
0.0.7 Ans: Linear algebra is widely used in data science and machine
learning for representing and processing data efficiently. Datasets
are often stored as matrices, and operations like matrix multipli-
cation and decomposition help in feature transformation, dimen-
sionality reduction (PCA), and optimization (gradient descent).
Algorithms like neural networks and support vector machines rely
heavily on vector spaces, matrix factorization, and eigenvalues to
extract meaningful patterns from data.

2. What are the challenges in learning linear algebra in data
science and machine learning?
0.0.8 Ans: Many concepts in linear algebra, such as eigenvalues, singu-
lar value decomposition (SVD), and vector spaces, can be abstract
and difficult to visualize, making them hard to grasp. Under-
standing how these concepts translate into real-world applications,
like recommendation systems or image recognition, requires both
mathematical intuition and coding skills. Additionally, working
with large datasets requires computational efficiency, which adds
another layer of complexity in implementing linear algebra tech-
niques in practical machine learning tasks.

Evaluation
Participation Knowledge Results Conduct Report Ethics Total

Name of the faculty:

Signature with date:
