ML Manual - 2023-24
B.E. Semester 7
(Electronics and Communication)
Year : 2023-24
Vishwakarma Government Engineering College, Chandkheda
Certificate
Place: __________
Date: __________
Preface
The main aim of any laboratory, practical, or field work is to enhance the required skills and to build students' ability to solve real-world problems by developing the relevant competencies in the psychomotor domain. Keeping this in view, GTU has designed a competency-focused, outcome-based curriculum for engineering degree programs in which sufficient weightage is given to practical work. This underlines the importance of skill enhancement among students, and it demands that every second of the time allotted for practicals be utilized by students, instructors, and faculty members to achieve the relevant outcomes by performing the experiments rather than merely studying them. For effective implementation of a competency-focused, outcome-based curriculum, it is essential that every practical be carefully designed to serve as a tool to develop and enhance the relevant competencies required by industry in every student. These psychomotor skills are very difficult to develop through the traditional chalk-and-board method of content delivery in the classroom. Accordingly, this lab manual is designed to focus on industry-defined, relevant outcomes rather than the old practice of conducting practicals merely to prove concepts and theory.
By using this lab manual, students can go through the relevant theory and procedure in advance of the actual performance, which creates interest and gives them a basic idea before they begin. This in turn enhances the predetermined outcomes. Each experiment in this manual begins with the competency, course outcomes, and practical outcomes (objectives). Students will also learn the safety measures and necessary precautions to be taken while performing the practical.
This manual also provides guidelines to faculty members to facilitate student-centric lab activities through each experiment by arranging and managing the necessary resources, so that students follow the procedures with the required safety and necessary precautions to achieve the outcomes. It also indicates how students will be assessed, by providing rubrics.
This lab manual focuses on the development of computer programs for machine learning that can change when exposed to new data. In this manual we will cover the basics of machine learning and the implementation of simple machine-learning algorithms using Python.
Machine learning is a method of teaching computers to learn from data, without being explicitly
programmed. Python is a popular programming language for machine learning because it has a
large number of powerful libraries and frameworks that make it easy to implement machine
learning algorithms.
To get started with machine learning using Python, you will need to have a basic understanding of
Python programming and some knowledge of mathematical concepts such as probability, statistics,
and linear algebra.
Utmost care has been taken while preparing this lab manual; however, there is always room for improvement. We therefore welcome constructive suggestions for improvement and for the removal of any errors.
Introduction to Machine Learning (3171114)
Sr. No. | Aim / Objective(s) of Experiment | CO1 | CO2 | CO3 | CO4 | CO5

The following industry-relevant competencies are expected to be developed in the student by undertaking the practical work of this laboratory.
1.
2.
Index
(Progressive Assessment Sheet)
Experiment No: 1
Date:
TensorFlow Library:
TensorFlow is at present the most popular software library for deep learning. There are several real-world applications of deep learning that make TensorFlow popular. As an open-source library for deep learning and machine learning, TensorFlow finds a role to play in text-based applications, image recognition, voice search, and many more. DeepFace, Facebook's image recognition system, uses TensorFlow for image recognition. Every Google app that you use has made good use of TensorFlow to improve your experience.
TensorFlow's name is derived directly from its core component: the tensor. A tensor is a vector or matrix of n dimensions that can represent all types of data. Values in a tensor hold identical data types with a known shape, and this shape is the dimensionality of the matrix. A vector is a one-dimensional tensor; a matrix is a two-dimensional tensor.
Here is a first example of TensorFlow. It shows how you can define constants and perform a computation with those constants (in TensorFlow 1.x this required a session; recent versions execute such operations eagerly).
# Import `tensorflow`
import tensorflow as tf

# Define two constants (these example values are assumed; the original listing omitted them)
x1 = tf.constant([1, 2, 3, 4])
x2 = tf.constant([5, 6, 7, 8])

# Multiply
result = tf.multiply(x1, x2)
print(result)
Scikit-learn Library:
Scikit-learn is an open-source machine learning library that supports supervised and unsupervised learning. It also provides various tools for model fitting, data preprocessing, model selection, model evaluation, and many other utilities. Scikit-learn provides dozens of built-in machine learning algorithms and models, called estimators. Each estimator can be fitted to some data using its fit method.
The library is built upon SciPy (Scientific Python), which must be installed before you can use scikit-learn. Extensions or modules for SciPy are conventionally named SciKits; as this module provides learning algorithms, it is named scikit-learn.
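As a quick illustration of the estimator/fit pattern described above, here is a minimal sketch (the choice of dataset and estimator is ours, not part of the manual):

# a minimal sketch of the estimator API: every estimator is fitted with fit()
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)            # small toy dataset bundled with scikit-learn
clf = RandomForestClassifier(random_state=0)
clf.fit(X, y)                                # fit the estimator to the data
print(clf.predict(X[:2]))                    # predict for the first two samples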
PyTorch Library:
PyTorch provides two main features:
- A replacement for NumPy that uses the power of GPUs and other accelerators.
- An automatic differentiation library that is useful for implementing neural networks.
PyTorch is closely related to the Lua-based Torch framework, which is actively used at Facebook.
Easy interface: PyTorch offers an easy-to-use API; hence it is considered very simple to operate, and it runs on Python. Code execution in this framework is quite easy.
Python usage: This library is considered Pythonic and integrates smoothly with the Python data science stack. Thus, it can leverage all the services and functionalities offered by the Python environment.
Computational graphs: PyTorch provides an excellent platform that offers dynamic computational graphs, so a user can change them during runtime. This is highly useful when a developer has no idea how much memory will be required for creating a neural network model.
PyTorch Tensors: Tensors are a specialized data structure very similar to arrays and matrices. Tensors are similar to NumPy's ndarrays, except that tensors can run on GPUs or other specialized hardware to accelerate computing.
NumPy is a popular open-source library used for mathematical and scientific computing in Python. Instead of reinventing the wheel, PyTorch interoperates really well with NumPy to leverage its existing ecosystem of tools and libraries.
import numpy as np
x = np.array([[1., 2.], [3., 4.]])
>>> x
array([[1., 2.],
       [3., 4.]])
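Continuing the example, here is a short sketch of the NumPy interoperability mentioned above (torch.from_numpy creates a tensor that shares memory with the source array):

import numpy as np
import torch

x = np.array([[1., 2.], [3., 4.]])
t = torch.from_numpy(x)      # tensor sharing memory with the NumPy array
print(t)
print(t.numpy())             # convert back to a NumPy array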
Rubrics:
Experiment No: 2
Date:
Relevant CO: Comprehend basic concepts of neural network and its use in machine
learning.
Objectives:
Code:
import numpy as np
from sklearn.model_selection import train_test_split

# input features (assumed here: 12 samples with 2 features each, to match
# the 12 output labels; the original listing omitted this array)
inp = np.arange(24).reshape(12, 2)

# output classification
out = np.array([0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0])

# sklearn.model_selection.train_test_split(*arrays, **options) -> list
x_train, x_test, y_train, y_test = train_test_split(inp, out, test_size=3, random_state=4)
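To verify the split, the shapes of the resulting arrays can be printed (a small sketch; with the 12 assumed samples above, test_size=3 keeps exactly 3 samples for testing):

print(x_train.shape, x_test.shape)    # (9, 2) (3, 2)
print(y_train.shape, y_test.shape)    # (9,) (3,)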
Output:
Rubrics:
Rubrics 1 2 3 4 5
Experiment No: 3
Exploratory Data Analysis
Date:
Relevant CO: Learn and implement various basic machine learning algorithms.
Exploratory Data Analysis (EDA) is a technique to analyze data using visual techniques. With this technique, we can get detailed information about the statistical summary of the data, deal with duplicate values and outliers, and spot trends or patterns present in the dataset.
Iris Dataset
The Iris dataset is considered the "Hello World" of data science. It contains five columns, namely Petal Length, Petal Width, Sepal Length, Sepal Width, and Species Type.
Once you have downloaded the Iris.csv file, we will use the Pandas library to load it and convert it into a dataframe. The read_csv() method is used to read CSV files.
Example:
import pandas as pd
df = pd.read_csv('Iris.csv')   # load the dataset into a dataframe
The describe() function applies basic statistical computations on the dataset, such as extreme values, count of data points, standard deviation, etc. Any missing or NaN value is automatically skipped. describe() gives a good picture of the distribution of the data.
Example:
df.describe()

We can also count the missing values in each column using isnull():
df.isnull().sum()
Checking Duplicates
Let’s see if our dataset contains any duplicates or not.
Pandas drop_duplicates() method helps in removing duplicates from the data frame.
Example:
data = df.drop_duplicates(subset="Species")
data
We can see that there are only three unique species. Let's see whether the dataset is balanced, i.e., whether all species contain an equal number of rows. We will use the Series.value_counts() function, which returns a Series containing counts of unique values.
Example:
df.value_counts("Species")
Data Visualization
We will use Matplotlib and Seaborn library for the data visualization.
Example:
# importing packages
import seaborn as sns
import matplotlib.pyplot as plt
sns.countplot(x='Species', data=df)
plt.show()
We will see the relationship between the sepal length and sepal width and also
between petal length and petal width.
# importing packages
import seaborn as sns
import matplotlib.pyplot as plt

sns.scatterplot(x='SepalLengthCm', y='SepalWidthCm', hue='Species', data=df)

# Placing Legend outside the Figure
plt.legend(bbox_to_anchor=(1, 1), loc=2)
plt.show()
# importing packages
import seaborn as sns
import matplotlib.pyplot as plt

sns.scatterplot(x='PetalLengthCm', y='PetalWidthCm', hue='Species', data=df)

# Placing Legend outside the Figure
plt.legend(bbox_to_anchor=(1, 1), loc=2)
plt.show()
Let’s plot all the column’s relationships using a pairplot. It can be used for
multivariate analysis.
Example:
# importing packages
import seaborn as sns
import matplotlib.pyplot as plt

sns.pairplot(df.drop(['Id'], axis=1), hue='Species', height=2)
Histograms
Histograms allow us to see the distribution of data for various columns. They can be used for univariate as well as bivariate analysis.
Example:
# importing packages
import seaborn as sns
import matplotlib.pyplot as plt

# a 2x2 grid of axes (the original listing omitted this line)
fig, axes = plt.subplots(2, 2, figsize=(10, 10))

axes[0, 0].set_title("Sepal Length")
axes[0, 0].hist(df['SepalLengthCm'], bins=7)

axes[0, 1].set_title("Sepal Width")
axes[0, 1].hist(df['SepalWidthCm'], bins=5)

axes[1, 0].set_title("Petal Length")
axes[1, 0].hist(df['PetalLengthCm'], bins=6)

axes[1, 1].set_title("Petal Width")
axes[1, 1].hist(df['PetalWidthCm'], bins=6)

plt.show()
Distplots are basically used for a univariate set of observations, visualized through a histogram, i.e., only one observation; hence we choose one particular column of the dataset.
Example:
# importing packages
import seaborn as sns
import matplotlib.pyplot as plt

plot = sns.FacetGrid(df, hue="Species")
plot.map(sns.distplot, "SepalLengthCm").add_legend()

plot = sns.FacetGrid(df, hue="Species")
plot.map(sns.distplot, "SepalWidthCm").add_legend()

plot = sns.FacetGrid(df, hue="Species")
plot.map(sns.distplot, "PetalLengthCm").add_legend()

plot = sns.FacetGrid(df, hue="Species")
plot.map(sns.distplot, "PetalWidthCm").add_legend()

plt.show()
Handling Correlation
Pandas dataframe.corr() is used to find the pairwise correlation of all columns in the dataframe. Any NA values are automatically excluded, and non-numeric columns are ignored.
Example:
data.corr(method='pearson')
Heatmaps
The heatmap is a data visualization technique that represents data as colors in two dimensions. Here it shows the correlation between all numerical variables in the dataset; in simpler terms, we can plot the correlation found above as a heatmap.
Example:
# importing packages
import seaborn as sns
import matplotlib.pyplot as plt

sns.heatmap(df.corr(method='pearson').drop(['Id'], axis=1).drop(['Id'], axis=0), annot=True)
plt.show()
Box Plots
We can use boxplots to see how a categorical value is distributed against the other numerical values.
Example:
# importing packages
import seaborn as sns
import matplotlib.pyplot as plt

def graph(y):
    sns.boxplot(x="Species", y=y, data=df)

plt.figure(figsize=(10, 10))

plt.subplot(221)
graph('SepalLengthCm')   # (this first panel was missing from the original listing)
plt.subplot(222)
graph('SepalWidthCm')
plt.subplot(223)
graph('PetalLengthCm')
plt.subplot(224)
graph('PetalWidthCm')

plt.show()
Handling Outliers
An outlier is a data item/object that deviates significantly from the rest of the (so-called normal) objects. Outliers can be caused by measurement or execution errors; the analysis for outlier detection is referred to as outlier mining.
There are many ways to detect outliers, and the removal process is the same as removing a data item from the pandas dataframe.
Let’s consider the iris dataset and let’s plot the boxplot for the SepalWidthCm column.
Example:
# importing packages
import seaborn as sns
import matplotlib.pyplot as plt

sns.boxplot(x='SepalWidthCm', data=df)
In the above graph, the values above 4 and below 2 are acting as outliers.
Removing Outliers
To remove an outlier, one must follow the same process as removing any entry from the dataset, using its exact position: all of the detection methods above ultimately return the list of data items that satisfy the outlier definition according to the method used.
Example:
We will detect the outliers using IQR and then we will remove them. We will also
draw the boxplot to see if the outliers are removed or not.
# Importing
import numpy as np
import pandas as pd
import seaborn as sns

# Interquartile range (the Q1 line was missing from the original listing)
Q1 = np.percentile(df['SepalWidthCm'], 25, interpolation='midpoint')
Q3 = np.percentile(df['SepalWidthCm'], 75, interpolation='midpoint')
IQR = Q3 - Q1

print("Old Shape: ", df.shape)

# Upper bound
upper = np.where(df['SepalWidthCm'] >= (Q3 + 1.5 * IQR))
# Lower bound
lower = np.where(df['SepalWidthCm'] <= (Q1 - 1.5 * IQR))

# Removing the Outliers
df.drop(upper[0], inplace=True)
df.drop(lower[0], inplace=True)
print("New Shape: ", df.shape)

sns.boxplot(x='SepalWidthCm', data=df)
1. What is EDA?
2. Write code for comparing sepal length and sepal width.
3. Explain histograms.
4. Define the following: (i) heatmaps (ii) box plot (iii) describe() (iv) checking duplicates.
5. Explain the libraries used for data visualization.
Experiment No: 4
Simple Linear Regression Algorithm
Date:
Relevant CO: Learn and implement various basic machine learning algorithms.
Objectives:
1. Implement simple linear regression algorithm
2. Plot Regression line.
3. Find coefficient of linear regression
Code:
import numpy as np
import matplotlib.pyplot as plt

def estimate_coef(x, y):
    # (this function was missing from the original listing; it is the
    # standard least-squares fit, which reproduces the output below)
    n = np.size(x)                          # number of observations/points
    m_x = np.mean(x)                        # mean of x
    m_y = np.mean(y)                        # mean of y
    SS_xy = np.sum(y * x) - n * m_y * m_x   # cross-deviation
    SS_xx = np.sum(x * x) - n * m_x * m_x   # deviation about x
    b_1 = SS_xy / SS_xx                     # regression coefficients
    b_0 = m_y - b_1 * m_x
    return (b_0, b_1)

def main():
    # observations / data
    x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    y = np.array([1, 3, 2, 5, 7, 8, 8, 9, 10, 12])

    # estimating coefficients
    b = estimate_coef(x, y)
    print("Estimated coefficients:\nb_0 = {}\nb_1 = {}".format(b[0], b[1]))

if __name__ == "__main__":
    main()
Output:
Estimated coefficients:
b_0 = 1.2363636363636363
b_1 = 1.1696969696969697
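Objective 2 asks for the regression line; here is a minimal sketch that reuses x, y and the coefficients b from the program above (e.g. placed at the end of main()):

# plot the observations and the fitted regression line
plt.scatter(x, y, color='m', marker='o')
y_pred = b[0] + b[1] * x          # predicted response vector
plt.plot(x, y_pred, color='g')
plt.xlabel('x')
plt.ylabel('y')
plt.show()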
Relevant CO: Learn and implement various basic machine learning algorithms.
Objective:
1. Implement simple linear regression algorithm.
2. Display scatter plot of linear regression.
3. Find coefficient, mean squared error and variance score of linear regression
Code:
import numpy as np
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score

# Load the diabetes dataset and create its object
diabetes = datasets.load_diabetes()
X = diabetes.data[:, np.newaxis, 2]

# Split the data into training and testing sets
X_train = X[:-30]
X_test = X[-30:]

# Split the target into training and testing sets
y_train = diabetes.target[:-30]
y_test = diabetes.target[-30:]

# Fit the model and evaluate (these steps were missing from the original listing)
regr = linear_model.LinearRegression()
regr.fit(X_train, y_train)
y_pred = regr.predict(X_test)

print('Coefficients:\n', regr.coef_)
print('Mean squared error: %.2f' % mean_squared_error(y_test, y_pred))
print('Variance score: %.2f' % r2_score(y_test, y_pred))
Output:
Coefficients:
[941.43097333]
Mean squared error: 3035.06
Variance score: 0.41
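Objective 2 asks for a scatter plot; here is a minimal sketch that reuses X_test, y_test and regr from the program above:

# scatter plot of the test data together with the fitted line
import matplotlib.pyplot as plt

plt.scatter(X_test, y_test, color='black')
plt.plot(X_test, regr.predict(X_test), color='blue', linewidth=3)
plt.xlabel('BMI (feature 2 of the diabetes data)')
plt.ylabel('disease progression')
plt.show()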
Experiment No: 5
Multiple Linear Regression Algorithm
Date:
Relevant CO: Learn and implement various basic machine learning algorithms.
Objective:
1. Implement multiple linear regression algorithm.
2. Display scatter plot of multiple linear regression.
3. Find difference between actual and predicted value.
Code:
# import libraries
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

# Import dataset (if you are using plain Python rather than Google Colab,
# load the CSV file from your computer instead)
data_df = pd.read_csv('/content/drive/MyDrive/test.csv')
data_df.head()

# define x and y
x = data_df.drop(['PE'], axis=1).values
y = data_df['PE'].values
print(x)
print(y)

# split, fit and predict (these steps were missing from the original listing)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=0)
model = LinearRegression().fit(x_train, y_train)
pred_y = model.predict(x_test)

# difference between actual and predicted values
pred_y_df = pd.DataFrame({'Actual': y_test, 'Predicted': pred_y, 'Difference': y_test - pred_y})
pred_y_df[0:20]
Output:
Experiment No: 6
Decision Tree Algorithm
Date:
Relevant CO: Learn and implement various basic machine learning algorithms.
Objective:
1. Implement Decision tree algorithm on IRIS datasets.
2. Plot tree.
3. Find classification report, confusion matrix and accuracy.
Code:
#Import library
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score

#load file
iris = pd.read_csv('/content/drive/MyDrive/finalIris.csv')
iris.head()

feature_cols = ['Id', 'SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm']
x = iris[feature_cols]
y = iris.Species
x.head()

# Next, we will divide the data into train and test split:
# 70% training data and 30% testing data
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=0)

# fit the decision tree classifier (this step was missing from the original listing)
clf = DecisionTreeClassifier()
clf.fit(x_train, y_train)

y_pred = clf.predict(x_test)

result = confusion_matrix(y_test, y_pred)
print("Confusion Matrix:")
print(result)
result1 = classification_report(y_test, y_pred)
print("Classification Report:")
print(result1)
result2 = accuracy_score(y_test, y_pred)
print("Accuracy:")
print(result2)
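Objective 2 asks to plot the tree; here is a minimal sketch that reuses clf and feature_cols from the program above:

# visualize the fitted decision tree
plt.figure(figsize=(12, 8))
tree.plot_tree(clf, feature_names=feature_cols, class_names=list(clf.classes_), filled=True)
plt.show()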
Output:
Experiment No: 7
Logistic Regression Algorithm
Date:
Relevant CO: Learn and implement various basic machine learning algorithms.
Objective:
1. Implement Logistic regression algorithm.
2. Find classification report, confusion matrix and accuracy.
Code:
# import libraries (the import lines were missing from the original listing)
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, classification_report

# get data
x = np.arange(10).reshape(-1, 1)
y = np.array([0, 1, 0, 0, 1, 1, 1, 1, 1, 1])

model = LogisticRegression(solver='liblinear', C=10.0, random_state=0)
model.fit(x, y)

# evaluate model
p_pred = model.predict_proba(x)
y_pred = model.predict(x)
score = model.score(x, y)
conf_m = confusion_matrix(y, y_pred)
report = classification_report(y, y_pred)

print('x:', x, sep='\n')
print('y:', y, sep='\n', end='\n\n')
print('intercept:', model.intercept_)
print('coef:', model.coef_, end='\n\n')
print('y_pred:', y_pred, end='\n\n')
print('score:', score, end='\n\n')
print('conf_m:', conf_m, sep='\n', end='\n\n')
print('report:', report, sep='\n')
Output:
x:
[[0]
[1]
[2]
[3]
[4]
[5]
[6]
[7]
[8]
[9]]
y:
[0 1 0 0 1 1 1 1 1 1]
intercept: [-1.51632619]
coef: [[0.703457]]
y_pred: [0 0 0 1 1 1 1 1 1 1]
score: 0.8
conf_m:
[[2 1]
[1 6]]
report:
              precision    recall  f1-score   support

           0       0.67      0.67      0.67         3
           1       0.86      0.86      0.86         7

    accuracy                           0.80        10
   macro avg       0.76      0.76      0.76        10
weighted avg       0.80      0.80      0.80        10
Experiment No: 8
SVM Classifier using Linear Kernel
Date:
Objective:
1. Implement SVM classifier using linear kernel
2. Plot scatter plot for classification of IRIS data
Code:
# importing files
import pandas as pd
import numpy as np
from sklearn import svm, datasets
import matplotlib.pyplot as plt

# load the iris dataset (this step was missing from the original listing)
iris = datasets.load_iris()
x = iris.data[:, :2]
y = iris.target

# svm boundaries
x_min, x_max = x[:, 0].min() - 1, x[:, 0].max() + 1
y_min, y_max = x[:, 1].min() - 1, x[:, 1].max() + 1
h = (x_max / x_min) / 100
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
x_plot = np.c_[xx.ravel(), yy.ravel()]

# regularization parameter
C = 1.0
svc_classifier = svm.SVC(kernel='linear', C=C).fit(x, y)
z = svc_classifier.predict(x_plot).reshape(xx.shape)

plt.contour(xx, yy, z, cmap=plt.cm.tab10, alpha=0.3)
plt.scatter(x[:, 0], x[:, 1], c=y, cmap=plt.cm.Set1)
plt.xlabel('sepal length')
plt.ylabel('sepal width')
plt.xlim(xx.min(), xx.max())
plt.title('support vector classifier with linear kernel')
plt.show()
Output:
Experiment No: 9
SVM Classifier using RBF Kernel
Date:
Objective:
1. Implement SVM classifier using rbf kernel
2. Plot scatter plot for classification of IRIS data
Code:
# importing files
import pandas as pd
import numpy as np
from sklearn import svm, datasets
import matplotlib.pyplot as plt

# load the iris dataset (this step was missing from the original listing)
iris = datasets.load_iris()
x = iris.data[:, :2]
y = iris.target

# svm boundaries
x_min, x_max = x[:, 0].min() - 1, x[:, 0].max() + 1
y_min, y_max = x[:, 1].min() - 1, x[:, 1].max() + 1
h = (x_max / x_min) / 100
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
x_plot = np.c_[xx.ravel(), yy.ravel()]

# regularization parameter
C = 1.0
svc_classifier = svm.SVC(kernel='rbf', gamma='auto', C=C).fit(x, y)
z = svc_classifier.predict(x_plot)
z = z.reshape(xx.shape)

plt.figure(figsize=(15, 5))
plt.subplot(121)
plt.contour(xx, yy, z, cmap=plt.cm.tab10, alpha=0.3)
plt.scatter(x[:, 0], x[:, 1], c=y, cmap=plt.cm.Set1)
plt.xlabel('sepal length')
plt.ylabel('sepal width')
plt.xlim(xx.min(), xx.max())
plt.title('support vector classifier with rbf kernel')
plt.show()
Output:
Experiment No: 10
Principal Component Analysis Algorithm
Date:
Relevant CO: Study dimensionality reduction concept and its role in machine
learning techniques.
Objective:
1. Implement dimensionality reduction using PCA.
2. Learn standard scalar and plot scaled output.
Code:
# import libraries; load the iris data and scale it
# (these steps were missing from the original listing)
import pandas as pd
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

iris = datasets.load_iris()
scaled_data = StandardScaler().fit_transform(iris.data)

#Applying PCA
#Taking no. of Principal Components as 3
pca = PCA(n_components=3)
pca.fit(scaled_data)
data_pca = pca.transform(scaled_data)
data_pca = pd.DataFrame(data_pca, columns=['PC1', 'PC2', 'PC3'])
data_pca.head()
Output:
        PC1       PC2       PC3
0 -2.264703  0.480027 -0.127706
1 -2.080961 -0.674134 -0.234609
2 -2.364229 -0.341908  0.044201
3 -2.299384 -0.597395  0.091290
4 -2.389842  0.646835  0.015738
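To see how much information the three components retain, the explained variance ratio of the fitted pca object can be printed (a short sketch):

print(pca.explained_variance_ratio_)            # variance captured by PC1, PC2, PC3
print("Total:", pca.explained_variance_ratio_.sum())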
Experiment No: 11
Random Forest Algorithm
Date:
Relevant CO: Study dimensionality reduction concept and its role in machine
learning techniques.
Objective:
1. Implement random forest algorithm using python.
2. Visualize important feature on bar plot.
3. Find accuracy of model.
Code:
# load data, split and fit the classifier (these steps were missing from the original listing)
import pandas as pd
from sklearn import datasets, metrics
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

iris = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

y_pred = clf.predict(X_test)
print("Accuracy:", metrics.accuracy_score(y_test, y_pred))

feature_imp = pd.Series(clf.feature_importances_, index=iris.feature_names).sort_values(ascending=False)
feature_imp
Output:
Accuracy: 0.9555555555555556
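Objective 2 asks to visualize the important features on a bar plot; here is a minimal sketch that reuses feature_imp from the program above:

# bar plot of the feature importance scores
import matplotlib.pyplot as plt
import seaborn as sns

sns.barplot(x=feature_imp, y=feature_imp.index)
plt.xlabel('Feature Importance Score')
plt.ylabel('Features')
plt.title('Important Features')
plt.show()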
Experiment No: 12
K means Clustering
Date:
Objective:
1. Implement K-Means Clustering using IRIS Dataset.
2. To find frequency distribution of species using pair plot.
3. Visualize correlation using a heat map.
Code:
#import libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import MinMaxScaler

#Reading Dataset
iris = pd.read_csv("/content/drive/MyDrive/finalIris.csv")
x = iris.iloc[:, [1, 2, 3, 4]].values   # feature columns (skipping the Id column)

#information of datatypes
iris.info()
iris[0:10]

# frequency distribution of species using a pair plot
sns.pairplot(iris, hue="Species", height=3)

# we can visualise the correlation using a heatmap
sns.heatmap(iris.drop(['Id', 'Species'], axis=1).corr(), annot=True)
plt.show()
Output:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 150 entries, 0 to 149
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Id 150 non-null int64
1 SepalLengthCm 150 non-null float64
2 SepalWidthCm 150 non-null float64
3 PetalLengthCm 150 non-null float64
4 PetalWidthCm 150 non-null float64
5 Species 150 non-null object
dtypes: float64(4), int64(1), object(1)
memory usage: 7.2+ KB
(sample row of iris[0:10])
   Id  SepalLengthCm  SepalWidthCm  PetalLengthCm  PetalWidthCm      Species
9  10            4.9           3.1            1.5           0.1  Iris-setosa
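KMeans and silhouette_score are imported in the listing above but never used; here is a minimal sketch of how they typically complete the experiment (the choice of 3 clusters matches the three species):

# scale the features, fit K-Means and score the clustering
x_scaled = MinMaxScaler().fit_transform(x)
kmeans = KMeans(n_clusters=3, random_state=0)
labels = kmeans.fit_predict(x_scaled)
print("Silhouette score:", silhouette_score(x_scaled, labels))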
Experiment No: 13
Naive Bayes Algorithm
Date:
Objective:
1. Implement Naive Bayes algorithm.
2. Find accuracy.
3. To visualize correlation using a heat map.
Code:
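The listing is not reproduced here; the following is a minimal sketch of the stated objectives using scikit-learn's GaussianNB on the built-in iris data (the dataset choice is an assumption):

# Gaussian Naive Bayes on iris: fit, accuracy, and correlation heatmap
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

iris = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.3, random_state=0)

model = GaussianNB().fit(X_train, y_train)
y_pred = model.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))

# objective 3: correlation heatmap of the features
df = pd.DataFrame(iris.data, columns=iris.feature_names)
sns.heatmap(df.corr(), annot=True)
plt.show()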
Output:
Experiment No: 14
Activation Function
Date:
Relevant CO: Comprehend basic concepts of Neural network and its use in machine
learning.
Objectives:
1. To implement activation function.
2. Visualize the working of activation function
Code:
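The listing is not reproduced here; the following is a minimal sketch of the stated objectives, implementing and plotting three common activation functions with NumPy and Matplotlib (the choice of functions is an assumption):

# implement and visualize sigmoid, tanh and ReLU
import numpy as np
import matplotlib.pyplot as plt

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0, z)

z = np.linspace(-5, 5, 200)
plt.plot(z, sigmoid(z), label='sigmoid')
plt.plot(z, np.tanh(z), label='tanh')
plt.plot(z, relu(z), label='ReLU')
plt.legend()
plt.title('Activation functions')
plt.show()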
Output:
Experiment No: 15
Convolutional Neural Network
Date:
Relevant CO: Comprehend basic concepts of Neural network and its use in machine
learning.
Objectives:
1. To implement a Convolutional Neural Network.
2. Find out the total parameters of the Convolutional Neural Network.
Code:
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 5, 3)   # 3x3 kernel
        self.conv2 = nn.Conv2d(5, 5, 3)   # 3x3 kernel
        self.fc1 = nn.Linear(5*3*3, 10)

    def forward(self, x):                 # (this def line was missing from the original listing)
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(-1, 5*3*3)             # flatten before the fully connected layer
        x = self.fc1(x)
        return x

net = Net()
print("Model is ", net)

model_params = sum(p.numel() for p in net.parameters())
print("\n Model parameter is ", model_params)
Output:
Model is Net(
(conv1): Conv2d(1, 5, kernel_size=(3, 3), stride=(1, 1))
(conv2): Conv2d(5, 5, kernel_size=(3, 3), stride=(1, 1))
(fc1): Linear(in_features=45, out_features=10, bias=True)
)
Model parameter is 740
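To sanity-check the architecture, a dummy forward pass can be run (the 7x7 input size is an assumption chosen so the conv output matches the 5*3*3 = 45 inputs of fc1):

# a 7x7 single-channel input shrinks to 5x5 after conv1 and 3x3 after conv2
dummy = torch.randn(1, 1, 7, 7)
out = net(dummy)
print(out.shape)    # torch.Size([1, 10])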