
Department of Computer Science & Engineering

School of Engineering Sciences & Technology

JAMIA HAMDARD, New Delhi


(Deemed to be University, NAAC A+)

Machine Learning Lab

B.Tech (CSEAI) - 5th Semester / BTCSEAI 506

Name of the Faculty: Dr. Kaveri Umesh Kadam

Submitted by: Ahmad Khuraim

Enrolment No: 2022-350-004

Section: AI (Artificial Intelligence)

CONTENTS

LIST OF EXPERIMENTS
1. Develop a cost function of linear regression using sample data
2. Develop gradient descent for linear regression using sample data
3. Implement a simple linear regression algorithm using sample data
4. Implement a logistic regression algorithm using sample data
5. Implement a neural network using sample data; activation functions in neural networks [sigmoid, hyperbolic tangent (tanh), ReLU]
6. Add regularization to the already developed logistic regression algorithm
7. Calculate bias and variance for the already developed algorithm
8. Calculate error for the already developed algorithm
9. Implement the K-means algorithm using sample data
10. Develop PCA based on sample data

Cost Function of Linear Regression

TITLE – ASSIGNMENT: 1

AIM: Develop a cost function of linear regression using sample data


THEORY:
COST FUNCTION

A cost function measures the performance of a machine learning model on given data. It quantifies the error between predicted and expected values and presents that error as a single real number. Depending on the problem, the cost function can be formed in many different ways. The purpose of a cost function is to be either:

 Minimized: the returned value is usually called cost, loss or error. The goal is to find the values of the model parameters for which the cost function returns as small a number as possible.
 Maximized: in this case, the returned value is called a reward. The goal is to find the values of the model parameters for which the returned number is as large as possible.

Mean squared error (MSE) is one of the most commonly used regression metrics. MSE represents the average squared difference between the predictions and the expected results. In other words, MSE is an alteration of MAE where, instead of taking the absolute values of the differences, we square them.

In MAE, each partial error equals the distance between the corresponding points in the coordinate system. In MSE, each partial error is equivalent to the area of the square built on the geometrical distance between the measured points. All these square areas are summed up and averaged.

We can write the MSE formula like this:

MSE = (1/m) · Σᵢ (yᵢ − ŷᵢ)²

where:

 i = index of a sample
 ŷᵢ = predicted value
 yᵢ = expected value
 m = number of samples in the dataset

PROGRAM WITH OUTPUT:
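The program pages in the original were screenshots. A minimal sketch of the cost-function computation described above (the sample data, function name, and candidate line here are my own assumptions) could look like this:

```python
import numpy as np

def mse_cost(y_true, y_pred):
    """Mean squared error: the average squared difference
    between expected and predicted values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2)

# Sample data: noisy points around the line y = 2x,
# evaluated against the candidate line y_hat = 2x.
x = np.array([1, 2, 3, 4])
y = np.array([2.1, 3.9, 6.2, 7.8])
y_hat = 2 * x
print(mse_cost(y, y_hat))  # ≈ 0.025
```

For a perfect fit the function returns 0; larger values indicate a worse fit.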

Gradient Descent of Linear Regression

TITLE – ASSIGNMENT: 2

AIM: Develop a gradient descent of linear regression using sample data.


THEORY:

Gradient Descent

Gradient Descent is an iterative optimization algorithm that finds the best-fit line y = mx + c for a given training dataset, where m is the slope and c is the intercept.

If we plot m and c against the MSE, the resulting cost surface acquires a bowl shape.

For some combination of m and c, we get the least error (MSE). That combination of m and c gives us our best-fit line.

The algorithm starts with some values of m and c (usually m = 0, c = 0). We calculate the MSE (cost) at m = 0, c = 0; say it is 100. We then adjust m and c by a small amount in the direction that reduces the cost, with the step size controlled by the learning rate, and observe a decrease in the MSE. We continue doing the same until the loss reaches a very small value or, ideally, 0 (which would mean zero error).

PROGRAM WITH OUTPUT:
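The original program was a screenshot. A minimal sketch of gradient descent on the MSE cost, following the update procedure described above (sample data, learning rate, and epoch count are my own choices), might look like:

```python
import numpy as np

def gradient_descent(x, y, lr=0.01, epochs=1000):
    """Fit y = m*x + c by gradient descent on the MSE cost."""
    m, c = 0.0, 0.0                                # start at m = 0, c = 0
    n = len(x)
    for _ in range(epochs):
        y_hat = m * x + c
        dm = (-2.0 / n) * np.sum(x * (y - y_hat))  # dMSE/dm
        dc = (-2.0 / n) * np.sum(y - y_hat)        # dMSE/dc
        m -= lr * dm                               # step against the gradient
        c -= lr * dc
    return m, c

# Sample data lying exactly on the line y = 2x + 1
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2 * x + 1
m, c = gradient_descent(x, y, lr=0.02, epochs=5000)
print(m, c)  # m ≈ 2, c ≈ 1
```

With enough iterations and a suitably small learning rate, m and c approach the slope and intercept of the true line.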

Linear Regression

TITLE – ASSIGNMENT: 3

AIM: Implement linear regression algorithm using sample data.


THEORY:

Linear Regression

Linear regression is one of the easiest and most popular Machine Learning
algorithms. It is a statistical method that is used for predictive analysis. Linear
regression makes predictions for continuous/real or numeric variables such as
sales, salary, age, product price, etc.

The linear regression algorithm shows a linear relationship between a dependent (y) variable and one or more independent (x) variables, hence the name linear regression. Since linear regression shows a linear relationship, it finds how the value of the dependent variable changes according to the value of the independent variable.

The linear regression model provides a sloped straight line representing the relationship between the variables.

Mathematically, we can represent a linear regression as:

y = a0 + a1x + ε

Here,

y = Dependent Variable (Target Variable)
x = Independent Variable (Predictor Variable)
a0 = intercept of the line (gives an additional degree of freedom)
a1 = linear regression coefficient (scale factor applied to each input value)
ε = random error

The x and y values form the training dataset used to build the linear regression model representation.

PROGRAM WITH OUTPUT:
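The original program was a screenshot. A minimal sketch of simple linear regression using the ordinary least squares formulas for a0 and a1 (the sample data and function names are my own assumptions) could look like:

```python
import numpy as np

def fit_linear_regression(x, y):
    """Ordinary least squares fit of y = a0 + a1*x."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    x_mean, y_mean = x.mean(), y.mean()
    # Slope: covariance of x and y over variance of x
    a1 = np.sum((x - x_mean) * (y - y_mean)) / np.sum((x - x_mean) ** 2)
    a0 = y_mean - a1 * x_mean        # intercept
    return a0, a1

def predict(a0, a1, x):
    return a0 + a1 * np.asarray(x, dtype=float)

# Sample data: e.g. years of experience vs. salary
x = [1, 2, 3, 4, 5]
y = [3.1, 4.9, 7.2, 8.8, 11.1]
a0, a1 = fit_linear_regression(x, y)
print(a0, a1)              # a0 ≈ 1.05, a1 ≈ 1.99
print(predict(a0, a1, 6))  # ≈ 12.99
```

The closed-form solution here gives the same line that gradient descent converges to in Assignment 2.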

Logistic Regression

TITLE – ASSIGNMENT: 4

AIM: Implement logistic regression algorithm using sample data.


THEORY:

Logistic Regression
Logistic regression is one of the most popular Machine Learning algorithms, which comes under the Supervised Learning technique. It is used for predicting a categorical dependent variable from a given set of independent variables.
 Logistic regression predicts the output of a categorical dependent variable. Therefore the outcome must be a categorical or discrete value: Yes or No, 0 or 1, True or False, etc. However, instead of giving the exact values 0 and 1, it gives probabilistic values which lie between 0 and 1.
 Logistic Regression is very similar to Linear Regression except in how it is used. Linear Regression is used for solving regression problems, whereas Logistic Regression is used for solving classification problems.
 In Logistic Regression, instead of fitting a straight regression line, we fit an "S"-shaped logistic function, whose output is bounded by the two limiting values 0 and 1.
 The curve from the logistic function indicates the likelihood of something, such as whether cells are cancerous or not, or whether a mouse is obese or not based on its weight.
 Logistic Regression is a significant machine learning algorithm because it has the ability to provide probabilities and to classify new data using continuous and discrete datasets.
 Logistic Regression can be used to classify observations using different types of data and can easily determine the most effective variables for the classification. The logistic (sigmoid) function is σ(z) = 1 / (1 + e^(−z)).

PROGRAM WITH OUTPUT:
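The original program was a screenshot. A minimal sketch of logistic regression trained by gradient descent on the log loss, using the sigmoid function from the theory above (the sample data, function names, and hyperparameters are my own assumptions), might look like:

```python
import numpy as np

def sigmoid(z):
    """Logistic function: maps any real number into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(x, y, lr=0.1, epochs=5000):
    """Fit P(y=1 | x) = sigmoid(w*x + b) by gradient descent on the log loss."""
    w, b = 0.0, 0.0
    n = len(x)
    for _ in range(epochs):
        p = sigmoid(w * x + b)                 # predicted probabilities
        w -= lr * np.sum((p - y) * x) / n      # dLoss/dw
        b -= lr * np.sum(p - y) / n            # dLoss/db
    return w, b

# Sample data: hours studied vs. pass (1) / fail (0)
x = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 3.5, 4.0, 5.0])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
w, b = fit_logistic(x, y)
probs = sigmoid(w * x + b)                     # probabilities in (0, 1)
preds = (probs >= 0.5).astype(int)             # threshold at 0.5 for classes
print(preds)
```

Thresholding the probabilities at 0.5 turns the "S"-shaped output into the discrete 0/1 classes discussed above.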
