
Introduction

Mathematical Foundation For Data Analysis

AKSHATHA K N
(P03NK21S0390)

Under the guidance of


Dr Harina P. Waghamore

BANGALORE UNIVERSITY
Department of Mathematics,
Jnana Bharathi Campus,
Bengaluru-560056, India.

December-2023


Objective
1 Introduction
2 Convergences and Sampling
Probably Approximately Correct
Concentration of Measure
3 Linear Regression
Polynomial Regression
Regularized Regression
4 Mathematical Classifications
Linear Classification
kNN classifiers
Neural Networks
5 Application and Impact
Data Analysis in Python
Educational Management System
Impact of Data Analysis on Football
6 References


Introduction

What is Data analysis?


The process of scrutinizing raw data with the purpose of drawing conclusions about that information is called "Data Analysis".
The main aim of data analysis is to convert the available cluttered data into a format which is easy to understand, more legible, conclusive, and which supports the mechanism of decision-making.

Preface
This project is meant for use with a self-contained course that introduces
many basic principles and techniques needed for modern data analysis. In
particular, it was constructed from material taught mainly in two courses.
The first is an early undergraduate course which was designed to prepare
students to succeed in rigorous Machine Learning and Data Mining
courses. The second course is that advanced Data Mining course.


Convergence and Sampling I

Sampling and Estimation

Most data analysis starts with some data set, call it P. It is composed of a set of n data points P = {p₁, p₂, . . . , pₙ}. We assume this data is drawn iid from a fixed but unknown distribution; call this f. ("iid" means: independently and identically distributed.)

Here we estimate the mean of f. Let us introduce a random variable X ∼ f: a hypothetical new data point. The mean of f is the expected value of X: E[X].

We estimate the mean of f using the sample mean: consider n iid random variables {Xᵢ} corresponding to a set of n independent observations {pᵢ}, and take their average

$$\bar{P} = \frac{1}{n}\sum_{i=1}^{n} p_i \quad \longleftarrow \quad \text{realize } \{X_i\} \overset{iid}{\sim} f.$$
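A minimal sketch of this estimate (NumPy; the Gamma distribution below merely stands in for the unknown f):

import numpy as np

rng = np.random.default_rng(0)

# Pretend f is unknown; a Gamma distribution stands in for it here.
P = rng.gamma(shape=2.0, scale=1.5, size=200)  # P = {p_1, ..., p_n}

# The sample mean P-bar estimates E[X] for X ~ f.
P_bar = P.mean()
print(P_bar)  # close to the true mean 3.0 of this stand-in f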


Convergence and Sampling II

Central Limit Theorem

Statement: Consider n iid random variables X₁, X₂, . . . , Xₙ, where each Xᵢ ∼ f for any fixed distribution f with mean µ and bounded variance σ². Then

$$\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$$

converges to the normal distribution with mean µ = E[Xᵢ] and variance σ²/n.

Highlights and important consequences

For any f, the distribution of X̄ looks like a normal distribution.
The mean of that normal distribution, which is the expected value of X̄, satisfies E[X̄] = µ, the mean of f. This implies that we can use X̄ as a guess for µ.
As n gets larger, the variance of X̄ decreases. So while X̄ has the right expected value, it also has some error, and this error decreases as n increases.
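A minimal simulation sketch of these consequences (NumPy; the exponential distribution stands in for an arbitrary skewed f):

import numpy as np

rng = np.random.default_rng(0)
n, trials = 100, 5000

# Draw `trials` sample means, each over n iid draws from a skewed f.
means = rng.exponential(scale=2.0, size=(trials, n)).mean(axis=1)

# CLT: the sample means concentrate around mu with variance sigma^2 / n.
print(means.mean())  # approx mu = 2.0
print(means.var())   # approx sigma^2 / n = 4.0 / 100 = 0.04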


Convergence and Sampling III


Probably Approximately Correct (PAC)

The three most common concentration of measure bounds provide increasingly strong bounds on the tails of distributions. Each provides a PAC bound of the form

$$\Pr[|\bar{X} - E[\bar{X}]| \ge \epsilon] \le \delta.$$

That is, the probability that X̄ is further than ε from its expected value is at most δ. In practice there are a variety of tools.

Concentration of Measure

Markov Inequality: Let X be a random variable such that X ≥ 0, i.e. it cannot take on negative values. Then for any parameter α > 0,

$$\Pr[X > \alpha] \le \frac{E[X]}{\alpha}.$$


Convergence and Sampling IV


Chebyshev Inequality: Now let X be a random variable where we know Var[X] and E[X]. Then for any parameter ε > 0,

$$\Pr[|X - E[X]| \ge \epsilon] \le \frac{\mathrm{Var}[X]}{\epsilon^2}.$$

This is clearly a PAC bound with δ = Var[X]/ε². This bound is typically stronger than the Markov one, since δ decreases quadratically in ε instead of linearly.

Chernoff-Hoeffding Inequality: An extension of the Chebyshev inequality. Consider a set of n iid random variables X₁, X₂, . . . , Xₙ and let $\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$. Assume each Xᵢ lies in a bounded domain [b, t], and let Δ = t − b. Then for any parameter ε > 0,

$$\Pr[|\bar{X} - E[\bar{X}]| > \epsilon] \le 2\exp\left(\frac{-2\epsilon^2 n}{\Delta^2}\right).$$
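As a quick empirical check (a minimal NumPy sketch; the uniform distribution and the constants below are illustrative), the observed tail probability should sit below the Chernoff-Hoeffding bound:

import numpy as np

rng = np.random.default_rng(0)
n, trials, eps = 50, 20000, 0.1

# X_i ~ Uniform[0, 1], so Delta = 1 and E[X-bar] = 0.5.
xbars = rng.uniform(0.0, 1.0, size=(trials, n)).mean(axis=1)

empirical = np.mean(np.abs(xbars - 0.5) > eps)
bound = 2 * np.exp(-2 * eps**2 * n / 1.0**2)
print(empirical, "<=", bound)  # observed tail probability vs. the bound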

Convergence and Sampling V

Union bound: Consider s possibly dependent random events Z₁, . . . , Zₛ. The probability that all events occur is at least

$$1 - \sum_{j=1}^{s}\left(1 - \Pr[Z_j]\right).$$

Importance sampling: There is a large set of items A = {a₁, a₂, . . . , aₙ}, and on sampling an item a′ⱼ, its weight w(a′ⱼ) is revealed. Our goal is to estimate

$$\bar{w} = \frac{1}{n}\sum_{a_i \in A} w(a_i).$$

An upper bound on each weight serves as an importance ψᵢ for each aᵢ. Let $\Psi = \sum_{i=1}^{n} \psi_i$ be the sum of all importances.

Improved concentration: To improve the concentration, the critical step is to analyze the range of each estimator $\frac{\Psi}{n\psi_j} \cdot w(a'_j)$. Since we have that w(a′ⱼ) ∈ [0, ψⱼ], it follows that

$$\frac{\Psi}{n\psi_j} \cdot w(a'_j) \in \frac{\Psi}{n\psi_j} \cdot [0, \psi_j] = \left[0, \frac{\Psi}{n}\right].$$
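A minimal sketch of this importance-sampled estimator (NumPy; the weights, importances, and sample size below are illustrative):

import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 100

w = rng.uniform(0.0, 5.0, size=n)        # hidden weights w(a_i)
psi = w + rng.uniform(0.0, 1.0, size=n)  # known upper bounds: importances psi_i
Psi = psi.sum()

# Sample k items with probability proportional to their importance.
idx = rng.choice(n, size=k, p=psi / Psi)

# Each sampled item a_j contributes (Psi / (n * psi_j)) * w(a_j); average them.
w_hat = np.mean(Psi / (n * psi[idx]) * w[idx])
print(w_hat, "vs true mean", w.mean())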


Convergence and Sampling VI

Now applying a Chernoff-Hoeffding bound, we can upper bound the probability that ŵ has more than ε error with respect to w̄:

$$\Pr[|\bar{w} - \hat{w}| \ge \epsilon] \le 2\exp\left(\frac{-2\epsilon^2 k}{(\Psi/n)^2}\right) = \delta.$$

Priority Sampling
1. For each item aᵢ ∈ A with weight w(aᵢ), generate uᵢ ∼ Unif(0, 1]. Set priority ρᵢ = w(aᵢ)/uᵢ.
2. Let τ = ρ′ₖ₊₁, the (k + 1)th largest priority.
3. Assign new weights

$$w'(a_i) = \begin{cases} \max(w(a_i), \tau) & \text{if } \rho_i > \tau \\ 0 & \text{if } \rho_i \le \tau \end{cases}.$$
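A minimal sketch of these three steps (NumPy; the weight distribution and the choice of k are illustrative):

import numpy as np

def priority_sample(w, k, rng):
    # 1. Priorities rho_i = w(a_i) / u_i with u_i ~ Unif(0, 1].
    u = 1.0 - rng.random(len(w))
    rho = w / u
    # 2. tau is the (k+1)-th largest priority.
    order = np.argsort(rho)[::-1]
    tau = rho[order[k]]
    # 3. Kept items (rho_i > tau) get weight max(w(a_i), tau); the rest get 0.
    w_new = np.zeros_like(w)
    keep = rho > tau
    w_new[keep] = np.maximum(w[keep], tau)
    return w_new

rng = np.random.default_rng(0)
w = rng.uniform(0.0, 5.0, size=1000)
w_prime = priority_sample(w, k=100, rng=rng)
print(w_prime.sum() / len(w), "vs true mean", w.mean())  # unbiased estimate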


Linear Regression I

Simple Linear Regression: The input is a set of n 2-dimensional data points (X, y) = {(x₁, y₁), (x₂, y₂), . . . , (xₙ, yₙ)}. The ultimate goal is to predict the y-values using only the x-values. In this case x is the explanatory variable and y is the dependent variable. In order to do this, we will "fit" a line through the data of the form

$$y = l(x) = ax + b,$$

where a and b are parameters of this line, and l is the model of this input data.

Measuring error: For every value x ∈ ℝ, we can predict a value ŷ = l(x). Then on our data set, we can examine, for each xᵢ, how close yᵢ is to ŷᵢ = l(xᵢ). This difference is called a residual:

$$r_i = |y_i - \hat{y}_i| = |y_i - l(x_i)|.$$


Linear Regression II

The residual measures the error of a single data point. The common approach to combining them is the sum of squared errors (SSE):

$$\mathrm{SSE}((X, y), l) = \sum_{i=1}^{n} (y_i - l(x_i))^2.$$

Solving for l: To solve for the line which minimizes SSE((X, y), l):
Calculate the averages $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ and $\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i$, and create centered n-dimensional vectors X̄ = (x₁ − x̄, x₂ − x̄, . . . , xₙ − x̄) for all x-coordinates and Ȳ = (y₁ − ȳ, y₂ − ȳ, . . . , yₙ − ȳ) for all y-coordinates.
1. Set a = ⟨Ȳ, X̄⟩/∥X̄∥².
2. Set b = ȳ − ax̄.


Linear Regression III

This defines l(x) = ax + b, where b = ȳ − ax̄ and

$$a = \frac{\|\bar{Y}\|}{\|\bar{X}\|}\cos\theta,$$

with θ the angle between the centered vectors X̄ and Ȳ (since ⟨Ȳ, X̄⟩ = ∥Ȳ∥∥X̄∥ cos θ).
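A minimal sketch of this closed-form fit (NumPy; the synthetic data and the function name are illustrative):

import numpy as np

def fit_line(x, y):
    # Closed-form simple linear regression minimizing SSE.
    x_bar, y_bar = x.mean(), y.mean()
    Xc, Yc = x - x_bar, y - y_bar        # centered vectors X-bar and Y-bar
    a = np.dot(Yc, Xc) / np.dot(Xc, Xc)  # a = <Y-bar, X-bar> / ||X-bar||^2
    b = y_bar - a * x_bar                # b = y-bar - a x-bar
    return a, b

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 3.0 * x + 2.0 + rng.normal(0, 0.5, size=50)
print(fit_line(x, y))  # close to (3.0, 2.0)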

Linear Regression with Multiple Explanatory Variables: Consider a data set (X, y) = {(x₁, y₁), (x₂, y₂), . . . , (xₙ, yₙ)} where each data point has xᵢ = (xᵢ,₁, xᵢ,₂, . . . , xᵢ,d) ∈ ℝᵈ and yᵢ ∈ ℝ. There are d explanatory variables, the coordinates of xᵢ, and one dependent variable, yᵢ. Now we use all of these variables at once to make a single prediction about the variable yᵢ.


Linear Regression IV

$$\begin{aligned}
\hat{y}_i = M_\alpha(x_i) &= M(x_{i,1}, x_{i,2}, \ldots, x_{i,d}) \\
&= \alpha_0 + \sum_{j=1}^{d} \alpha_j x_{i,j} \\
&= \alpha_0 + \alpha_1 x_{i,1} + \alpha_2 x_{i,2} + \cdots + \alpha_d x_{i,d} \\
&= \langle \alpha, (1, x_{i,1}, x_{i,2}, \ldots, x_{i,d}) \rangle.
\end{aligned}$$

Polynomial Regression:

$$\hat{y} = M_p(x) = \alpha_0 + \alpha_1 x + \alpha_2 x^2 + \cdots + \alpha_p x^p = \alpha_0 + \sum_{i=1}^{p} \alpha_i x^i.$$

Measure the error for a single data point (xᵢ, yᵢ) as a residual rᵢ = |ŷᵢ − yᵢ| = |M_α(xᵢ) − yᵢ|, and the error on n data points as the sum of squared residuals:

$$\mathrm{SSE}(P, M_\alpha) = \sum_{i=1}^{n} r_i^2 = \sum_{i=1}^{n} (M_p(x_i) - y_i)^2.$$


Linear Regression V
Then for n data points (x₁, y₁), . . . , (xₙ, yₙ) we can create an n × (p + 1) data matrix

$$\hat{X}_p = \begin{pmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^p \\ 1 & x_2 & x_2^2 & \cdots & x_2^p \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^p \end{pmatrix}, \qquad y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}.$$
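A minimal sketch of building this matrix and solving the least-squares problem (NumPy; the data and function name are illustrative):

import numpy as np

def fit_polynomial(x, y, p):
    # Build the n x (p+1) matrix with columns 1, x, x^2, ..., x^p.
    Xp = np.vander(x, N=p + 1, increasing=True)
    alpha, *_ = np.linalg.lstsq(Xp, y, rcond=None)
    return alpha  # (alpha_0, alpha_1, ..., alpha_p)

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=100)
y = 1.0 - 2.0 * x + 0.5 * x**2 + rng.normal(0, 0.1, size=100)
print(fit_polynomial(x, y, p=2))  # close to (1.0, -2.0, 0.5)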

Regularized Regression

Gauss-Markov Theorem: Linear regression is in some sense optimal, as formalized by the Gauss-Markov theorem [5]. This states that, for data (X, y), the linear model M_α derived from α = (XᵀX)⁻¹Xᵀy is the best possible, conditioned on three things:
"best" is defined as lowest variance in the residuals,
the solution has 0 expected error,
the errors εᵢ = ⟨xᵢ, α⟩ − yᵢ are uncorrelated.

Linear Regression VI

Tikhonov Regularization for Ridge Regression: Tikhonov regularization results in a parameterized linear regression known as ridge regression. Consider again a data set (X, y) with n data points. Now, instead of minimizing over α

$$\mathrm{SSE}(X, y, M_\alpha) = \sum_{i=1}^{n} (y_i - M_\alpha(x_i))^2 = \|X\alpha - y\|_2^2,$$

we introduce a new cost function, for a parameter s > 0, as

$$W_\circ(X, y, \alpha, s) = \frac{1}{n}\|X\alpha - y\|_2^2 + s\|\alpha\|_2^2.$$

Lasso: A surprisingly effective variant of this regularization is known as the lasso [3]. It replaces the W∘ cost function, again using a parameter s > 0, with

$$W_\diamond(X, y, \alpha, s) = \frac{1}{n}\|X\alpha - y\|_2^2 + s\|\alpha\|_1.$$
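Unlike the lasso, W∘ has a closed-form minimizer, α = (XᵀX + nsI)⁻¹Xᵀy. A minimal NumPy sketch, with illustrative data:

import numpy as np

def ridge(X, y, s):
    # Minimizer of (1/n)||X a - y||_2^2 + s ||a||_2^2.
    n, d = X.shape
    return np.linalg.solve(X.T @ X + n * s * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.0, 0.5, 3.0]) + rng.normal(0, 0.1, size=200)
print(ridge(X, y, s=0.01))  # coefficients shrunk slightly toward 0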


Linear Regression VII

Orthogonal Matching Pursuit: There is no simple closed-form solution for the lasso, but there is a common approach towards finding a good solution which retains its most important properties. Indeed, running the following procedure, orthogonal matching pursuit (OMP), does a good job of revealing these properties.

Algorithm 3.6.1. Orthogonal Matching Pursuit
Set r = y; and αⱼ = 0 for all j ∈ [d]
for i = 1 to k do
Set Xⱼ = arg max_{Xⱼ′ ∈ X} |⟨r, Xⱼ′⟩|
Set αⱼ = arg min_γ ∥r − Xⱼγ∥₂ + s|γ|
Set r = r − Xⱼαⱼ
return α
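A minimal sketch of this greedy loop (NumPy; for simplicity the per-step coefficient is plain 1-dimensional least squares, i.e. the s = 0 case, which is an assumption; the data and names are illustrative):

import numpy as np

def omp(X, y, k):
    n, d = X.shape
    alpha, r = np.zeros(d), y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(X.T @ r)))          # column most correlated with r
        gamma = (X[:, j] @ r) / (X[:, j] @ X[:, j])  # 1-d least squares (s = 0)
        alpha[j] += gamma
        r = r - gamma * X[:, j]                      # update the residual
    return alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = 4.0 * X[:, 3] - 2.0 * X[:, 7] + rng.normal(0, 0.05, size=100)
print(np.round(omp(X, y, k=5), 2))  # large entries mostly at columns 3 and 7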


Mathematical Classifications I

Linear Classifiers: Our input here is a point set X ⊂ ℝᵈ, where each element xᵢ ∈ X also has an associated label yᵢ ∈ {−1, +1}. Assume each (xᵢ, yᵢ) ∼ µ; that is, each data point pair is drawn iid from some fixed but unknown distribution. Then our goal is a function g : ℝᵈ → ℝ, so that if yᵢ = +1 then g(xᵢ) ≥ 0, and if yᵢ = −1 then g(xᵢ) ≤ 0.
We restrict g to be linear. For a data point x = (x⁽¹⁾, x⁽²⁾, . . . , x⁽ᵈ⁾) we enforce that

$$g(x) = \alpha_0 + x^{(1)}\alpha_1 + x^{(2)}\alpha_2 + \cdots + x^{(d)}\alpha_d = \alpha_0 + \sum_{j=1}^{d} x^{(j)}\alpha_j.$$

Linear classification via linear regression: For each data point (xᵢ, yᵢ) ∈ ℝᵈ × ℝ, we can immediately represent xᵢ as the values of d explanatory variables, and yᵢ as the single dependent variable. Then we can set up an n × (d + 1) matrix X̄, where the ith row is (1, xᵢ); that is,


Mathematical Classifications II

the first coordinate is 1, and the next d coordinates come from the vector xᵢ. Then with a vector y ∈ ℝⁿ, we can solve for

$$\alpha = (\bar{X}^T \bar{X})^{-1} \bar{X}^T y.$$

We now have a set of d + 1 coefficients α = (α₀, α₁, . . . , α_d) describing a linear function g : ℝᵈ → ℝ defined by

$$g(x) = \langle \alpha, (1, x) \rangle.$$
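A minimal sketch of this regression-based classifier (NumPy; the synthetic data and function names are illustrative):

import numpy as np

def fit_linear_classifier(X, y):
    # Prepend the constant-1 column, then solve least squares on labels in {-1, +1}.
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    alpha, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return alpha

def predict(alpha, X):
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    return np.sign(Xb @ alpha)  # classify by the sign of g(x) = <alpha, (1, x)>

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] + 2.0 * X[:, 1] + 0.3)
alpha = fit_linear_classifier(X, y)
print(np.mean(predict(alpha, X) == y))  # training accuracy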

Linear classification via gradient descent: The linear regression SSE cost function is not the correct one; what is the correct one? We might define a cost function Δ counting the number of misclassifications:

$$\Delta(g, (X, y)) = \sum_{i=1}^{n} \big(1 - \mathbf{1}(\mathrm{sign}(y_i) = \mathrm{sign}(g(x_i)))\big).$$


Mathematical Classifications III


Loss Functions: To use gradient descent for classifier learning, we will use a proxy for Δ called a loss function L. These are typically convex, and their goal is to approximate Δ:

$$L(g, (X, y)) = \sum_{i=1}^{n} l_i(g, (x_i, y_i)) = \sum_{i=1}^{n} l_i(z_i).$$

Cross-validation and Regularization: Ultimately, in running gradient descent for classification, one typically defines the overall cost function f using a regularization term r(α) as well. For instance, r(α) = ∥α∥₂ is easy to use and r(α) = ∥α∥₁ induces sparsity, as discussed in the context of regression. In general, the regularizer r(α) is combined with a loss function $L(g_\alpha, (X, y)) = \sum_{i=1}^{n} l_i(g_\alpha, (x_i, y_i))$ as

$$f(\alpha) = L(g_\alpha, (X, y)) + \eta\, r(\alpha).$$

Perceptron Algorithm: The perceptron algorithm explicitly uses the linear structure of the problem.


Mathematical Classifications IV
The algorithm: Start with some normal direction w, and then add misclassified points to w one at a time.

Algorithm 4.3.1. Perceptron(X, y)
Initialize w = yᵢxᵢ for any (xᵢ, yᵢ) ∈ (X, y)
repeat
For any (xᵢ, yᵢ) such that yᵢ⟨xᵢ, w⟩ < 0: update w ← w + yᵢxᵢ
until no misclassified points remain
return w ← w/∥w∥
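A minimal sketch of this loop (NumPy; the round cap and the separable synthetic data are illustrative assumptions):

import numpy as np

def perceptron(X, y, max_rounds=1000):
    w = y[0] * X[0]              # initialize with any labeled point y_i * x_i
    for _ in range(max_rounds):  # cap the rounds in case data is not separable
        wrong = np.nonzero(y * (X @ w) < 0)[0]
        if len(wrong) == 0:
            break
        i = wrong[0]
        w = w + y[i] * X[i]      # add a misclassified point to w
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
margin = X @ np.array([1.0, 2.0])
X, y = X[np.abs(margin) > 0.3], np.sign(margin[np.abs(margin) > 0.3])
print(perceptron(X, y))  # roughly parallel to (1, 2), normalized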

Kernels: For two vectors p = (p₁, . . . , p_d), x = (x₁, . . . , x_d) ∈ ℝᵈ, linear classifiers use the inner product

$$\langle p, x \rangle = \sum_{i=1}^{d} p_i x_i.$$

A kernel K(p, x) replaces this inner product. For instance, we can use the following non-linear functions:

K(p, x) = exp(−∥p − x∥²/σ²) for the Gaussian kernel, with bandwidth σ,


Mathematical Classifications V
K(p, x) = exp(−∥p − x∥/σ) for the Laplace kernel, with bandwidth σ,
K(p, x) = (⟨p, x⟩ + c)ʳ for the polynomial kernel of power r, with control parameter c > 0.

kNN Classifiers: The kNN classifier means the k-nearest-neighbors classifier, which works as follows. Choose a scalar parameter k (say k = 5). Next define a majority function maj : {−1, +1}ᵏ → {−1, +1}. For a set Y = (y₁, y₂, . . . , y_k) ∈ {−1, +1}ᵏ it is defined

$$\mathrm{maj}(Y) = \begin{cases} +1 & \text{if more than } k/2 \text{ elements of } Y \text{ are } +1 \\ -1 & \text{if more than } k/2 \text{ elements of } Y \text{ are } -1 \end{cases}.$$
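A minimal sketch of a kNN classifier (NumPy; the Euclidean distance and the synthetic data are illustrative choices):

import numpy as np

def knn_classify(X_train, y_train, x, k=5):
    dists = np.linalg.norm(X_train - x, axis=1)     # distances to all training points
    nearest = np.argsort(dists)[:k]                 # indices of the k nearest neighbors
    return 1 if y_train[nearest].sum() > 0 else -1  # maj over labels in {-1, +1}

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
print(knn_classify(X, y, np.array([1.0, 1.0]), k=5))  # expected: +1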

Neural Networks: A neural network is a learning algorithm intuitively based on how a neuron works in the brain. A neuron takes in a set of inputs x = (x₁, x₂, . . . , x_d) ∈ ℝᵈ, weights each input by a corresponding scalar w = (w₁, w₂, . . . , w_d), and "fires" a signal if the total weight $\sum_{i=1}^{d} w_i x_i$ is greater than some threshold b.


Mathematical Classifications VI
Figure: A single neuron. The inputs x₁, . . . , x_d are weighted by w₁, . . . , w_d, and the node outputs {0, 1} according to whether ⟨x, w⟩ − b > 0.

In a neural net, typically each xᵢ and yᵢ is restricted to a range [−1, 1], [0, 1], or [0, ∞), not just the two classes {−1, +1}. Since a linear function does not guarantee this of its output, instead of a binary threshold, each node typically applies an activation function ϕ(y) to its output:

hyperbolic tangent: $\phi(y) = \tanh(y) = \frac{e^y - e^{-y}}{e^y + e^{-y}} \in [-1, 1]$,
sigmoid: $\phi(y) = \frac{1}{1 + e^{-y}} = \frac{e^y}{e^y + 1} \in [0, 1]$,
ReLU: $\phi(y) = \max(0, y) \in [0, \infty)$.
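A minimal sketch of one neuron with these activations (NumPy; the input, weights, and threshold values are illustrative):

import numpy as np

def neuron(x, w, b, phi):
    # One neuron: weighted sum <x, w> - b passed through an activation phi.
    return phi(np.dot(x, w) - b)

x = np.array([0.5, -1.0, 2.0])
w = np.array([0.8, 0.1, -0.3])
print(neuron(x, w, 0.2, np.tanh))                             # in [-1, 1]
print(neuron(x, w, 0.2, lambda y: 1.0 / (1.0 + np.exp(-y))))  # sigmoid, in [0, 1]
print(neuron(x, w, 0.2, lambda y: np.maximum(0.0, y)))        # ReLU, in [0, inf)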


Application And Impact I

1.Data Analysis in Python

Data analysis is mainly used in making business decisions. Many libraries are available for doing the analysis, for example NumPy, Pandas, Seaborn, Matplotlib, Sklearn, etc.

Main Phases in Data Analysis
i. Data requirements
ii. Data collection
iii. Data processing
iv. Data cleaning
v. Exploratory data analysis
vi. Modeling and algorithms
vii. Data product
Data Analysis in Python: The important libraries, the platforms, and the dataset used to carry out the analysis will be introduced. Usage of various Python functions for numerical analysis is given, along with various methods of plotting graphs and charts.


Application And Impact II


1 What is Python?

Python is a high-level, interpreted, multi-purpose programming language. Many programming paradigms, like procedural and object-oriented programming, are supported in Python [1].
Open source and free
Interpreted language
Dynamic typing
Portable
Numerous IDEs

2 Packages used
NumPy, Pandas, Seaborn, Matplotlib


Application And Impact III

3 Platform used
Anaconda [4].

4 Dataset used
Importing dataset
A sample of dataset
Cleaning data
Importing libraries

5 Exploratory Data Analysis (EDA): In statistics, exploratory data analysis is an approach to analyzing data sets to summarize their main characteristics, often using statistical graphics and other data visualization methods.
Data types
Describing the dataset
Correlations
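A minimal pandas sketch of these EDA steps (the file name data.csv is a hypothetical placeholder for the dataset used):

import pandas as pd

df = pd.read_csv("data.csv")       # hypothetical dataset file

print(df.dtypes)                   # data types of each column
print(df.describe())               # summary statistics for numeric columns
print(df.corr(numeric_only=True))  # pairwise correlations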


Application And Impact IV

6 Graphical EDA (GEDA): Fundamentally, graphical exploratory data analysis is the graphical equivalent to conventional non-graphical exploratory data analysis.
A. Univariate GEDA: Histogram, Stem Plot, Box Plot
B. Multivariate GEDA: Scatter Plot, Heat Maps, Count Plot
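A minimal Seaborn/Matplotlib sketch of a few of these plots (again assuming the hypothetical data.csv, with a numeric first column):

import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

df = pd.read_csv("data.csv")  # hypothetical dataset file, as above

fig, axes = plt.subplots(1, 3, figsize=(15, 4))
sns.histplot(df.iloc[:, 0], ax=axes[0])              # univariate: histogram
sns.boxplot(y=df.iloc[:, 0], ax=axes[1])             # univariate: box plot
sns.heatmap(df.corr(numeric_only=True), ax=axes[2])  # multivariate: heat map
plt.show()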

Conclusion: Various phases of data analysis, including data collection, cleaning, and analysis, are discussed briefly. Exploratory data analysis is mainly studied here. For the implementation, the Python programming language is used.


Application And Impact V

2.Educational Management System

1 Educational data analysis is used to study the data available in the educational field and bring out the hidden knowledge in it. Analysis is a process of discovering, analyzing, and interpreting meaningful patterns from large amounts of data.
2 Data analysis relies on data mining techniques such as classification, association, correlation, categorization, prediction, estimation, clustering, trend analysis, and visualization.
3 Predictive analysis in an educational management system:
Data collection
Build the predictive model
Model validation
Analytics-enabled decision making


Application And Impact VI

Figure: Stakeholders of Educational Management System.


Application And Impact VII

International status of educational data analysis

Educational data analysis techniques can reveal useful information to educators, helping them design or modify the structure of courses. Students can also facilitate their studies using the discovered knowledge.
Learning analysis focuses on collecting and analyzing data from different sources to provide information related to teaching and learning. This helps educational institutions improve their quality of learning.
Santa Monica College's Glass Classroom initiative aims to enhance students' and teachers' performance by collecting and analyzing large amounts of data [2].


Application And Impact VIII

A Survey of work done in the Research area and the Need for
more research

Figure: Parameters considered for Students Performance.


Application And Impact IX


3. The Impact of Data Analysis on Football

Data used in football: Football is a results-driven business. If a player is not performing well, they will soon be moved on to another club. Similarly, if a manager is not getting the results the fans demand, they will soon find themselves looking for another job. It is a cutthroat industry, with all involved parties constantly looking for new ways to gain an advantage over their competitors. Data analysis offers just that: it can be used by clubs and football organisations to gain a deeper understanding of players, performances, and results, and to make changes to improve these as they move forward.

How is this Data Implemented?


Recruitment is an integral part of success. If clubs don’t get the right
players for the right positions, they will never be able to achieve their
goals and objectives. By using data, clubs can identify exactly what they
need, whether that be a defender who can win aerial duels or a winger


Application And Impact X

who can create goal-scoring opportunities. They can then look over lists
of potential recruits and check to see whether their stats match what
they’re looking for.

What has the Impact been?


Through the adoption and integration of data analysis, football is becoming more and more like a science. Clubs now have huge teams dedicated to data collection and analysis, and the work they do directly informs decisions made by the clubs' managerial and training staff. The game has become more competitive and more precise. Clubs need to make the perfect decisions every time if they want any chance of lifting a trophy.


References

[1] Guido van Rossum et al. "Python programming language." USENIX Annual Technical Conference (2007).
[2] L. Johnston. "The Glass Classroom." Blog, http://glassclass.room.blogspot.ca/2012/12/the-glassclassroom-big-data.html (1 Dec 2012).
[3] Osborne, Michael R., Brett Presnell, and Berwin A. Turlach. "On the lasso and its dual." Journal of Computational and Graphical Statistics 9.2 (2000): pp. 319-337.
[4] Rolon-Mérette, Damien, et al. "Introduction to Anaconda and Python: Installation and setup." The Quantitative Methods for Psychology 16.5 (2016): pp. S3-S11.
[5] Shaffer, Juliet Popper. "The Gauss-Markov theorem and random regressors." The American Statistician 45.4 (1991): pp. 269-273.


THANK YOU
