
IU2141230089 DSC(CE0630) 6CSE - B1

PRACTICAL - 2
Aim: Study of various Machine Learning libraries [SciPy, Keras, Scikit-Learn, PyTorch,
TensorFlow, Seaborn, Plotly].
SciPy:
SciPy is a collection of mathematical algorithms and convenience functions built on NumPy.
It adds significant power to Python by providing the user with high-level commands and
classes for manipulating and visualizing data.
from scipy import linalg, optimize, integrate
from scipy.integrate import quad

# Integrand a*x**2 + b, integrated over [0, 1] with quad
def integrand(x, a, b):
    return a * x**2 + b

a = 2
b = 1
I = quad(integrand, 0, 1, args=(a, b))   # returns (value, estimated error)
I

(1.6666666666666667, 1.8503717077085944e-14)

from scipy.integrate import dblquad
area = dblquad(lambda x, y: x*y, 0, 0.5, lambda x: 0, lambda x: 1-2*x)
area

(0.010416666666666668, 4.101620128472366e-16)

import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator, Akima1DInterpolator

x = np.array([1., 2., 3., 4., 4.5, 5., 6., 7., 8])
y = x**2
y[4] += 101   # introduce an outlier to compare how the interpolators behave

import matplotlib.pyplot as plt


xx = np.linspace(1, 8, 51)
plt.plot(xx, CubicSpline(x, y)(xx), '--', label='spline')
plt.plot(xx, Akima1DInterpolator(x, y)(xx), '-', label='Akima1D')
plt.plot(xx, PchipInterpolator(x, y)(xx), '-', label='pchip')
plt.plot(x, y, 'o')
plt.legend()
plt.show()
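The import above also pulls in scipy.linalg and scipy.optimize, which the examples do not
exercise. The short sketch below (not part of the original practical) shows a typical use of
each: solving a small linear system and minimising a one-dimensional function.

# Sketch (not from the original practical): scipy.linalg and scipy.optimize in action.
import numpy as np
from scipy import linalg, optimize

A = np.array([[3., 1.],
              [1., 2.]])
b = np.array([9., 8.])
x = linalg.solve(A, b)        # solves A @ x = b  ->  array([2., 3.])
print(x)

res = optimize.minimize_scalar(lambda t: (t - 2)**2 + 1)
print(res.x)                  # minimum of (t-2)^2 + 1 is at t = 2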

Keras:
Keras is an open-source library that provides a Python interface for artificial neural
networks. It originally served as an interface for TensorFlow; Keras 3 can also run on JAX
or PyTorch backends, as the example below does with JAX. Designed to enable fast
experimentation with deep neural networks, Keras focuses on being user-friendly, modular,
and extensible.
import numpy as np
import os

os.environ["KERAS_BACKEND"] = "jax"

# Note that Keras should only be imported after the backend
# has been configured. The backend cannot be changed once the
# package is imported.
import keras

# Load the data and split it between train and test sets
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

# Scale images to the [0, 1] range
x_train = x_train.astype("float32") / 255
x_test = x_test.astype("float32") / 255

# Make sure images have shape (28, 28, 1)
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)

print("x_train shape:", x_train.shape)


print("y_train shape:", y_train.shape)
print(x_train.shape[0], "train samples")
print(x_test.shape[0], "test samples")

Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11490434/11490434 [==============================] - 0s 0us/step
x_train shape: (60000, 28, 28, 1)
y_train shape: (60000,)
60000 train samples
10000 test samples

# Model parameters
num_classes = 10
input_shape = (28, 28, 1)

model = keras.Sequential(
    [
        keras.layers.Input(shape=input_shape),
        keras.layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
        keras.layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
        keras.layers.MaxPooling2D(pool_size=(2, 2)),
        keras.layers.Conv2D(128, kernel_size=(3, 3), activation="relu"),
        keras.layers.Conv2D(128, kernel_size=(3, 3), activation="relu"),
        keras.layers.GlobalAveragePooling2D(),
        keras.layers.Dropout(0.5),
        keras.layers.Dense(num_classes, activation="softmax"),
    ]
)

model.summary()

Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_4 (Conv2D) (None, 26, 26, 64) 640

conv2d_5 (Conv2D) (None, 24, 24, 64) 36928

max_pooling2d_1 (MaxPoolin (None, 12, 12, 64) 0


g2D)

conv2d_6 (Conv2D) (None, 10, 10, 128) 73856

conv2d_7 (Conv2D) (None, 8, 8, 128) 147584

global_average_pooling2d_1 (None, 128) 0


(GlobalAveragePooling2D)
IU2141230089 DSC(CE0630) 6CSE - B1

dropout_1 (Dropout) (None, 128) 0

dense_1 (Dense) (None, 10) 1290

=================================================================
Total params: 260298 (1016.79 KB)
Trainable params: 260298 (1016.79 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________

model.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(),
    optimizer=keras.optimizers.Adam(learning_rate=1e-3),
    metrics=[
        keras.metrics.SparseCategoricalAccuracy(name="acc"),
    ],
)

batch_size = 128
epochs = 2

callbacks = [
    keras.callbacks.ModelCheckpoint(filepath="model_at_epoch_{epoch}.keras"),
    keras.callbacks.EarlyStopping(monitor="val_loss", patience=2),
]

model.fit(
    x_train,
    y_train,
    batch_size=batch_size,
    epochs=epochs,
    validation_split=0.15,
    callbacks=callbacks,
)
score = model.evaluate(x_test, y_test, verbose=0)

Epoch 1/2
399/399 [==============================] - 298s 741ms/step - loss: 0.1371 - acc: 0.9601 - val_loss: 0.0545 - val_acc: 0.9839
Epoch 2/2
399/399 [==============================] - 290s 727ms/step - loss: 0.1095 - acc: 0.9681 - val_loss: 0.0444 - val_acc: 0.9888

model.save("final_model.keras")

model = keras.saving.load_model("final_model.keras")

predictions = model.predict(x_test)

313/313 [==============================] - 17s 53ms/step



predictions

array([[1.5613863e-06, 1.1609918e-05, 4.8638083e-04, ..., 9.9542660e-01,
        3.9146531e-07, 3.7584922e-03],
       [9.4122279e-06, 1.4619279e-05, 9.9968886e-01, ..., 6.7920735e-07,
        1.3156673e-06, 3.9925130e-06],
       [3.5354977e-05, 9.9578518e-01, 1.4452892e-05, ..., 1.9032742e-03,
        1.2317665e-04, 3.2255653e-04],
       ...,
       [1.5561769e-11, 3.5657360e-10, 5.6928425e-12, ..., 2.8625214e-14,
        7.3645079e-10, 3.3094822e-09],
       [6.5197838e-07, 2.1598499e-15, 1.7285150e-07, ..., 5.2790702e-11,
        2.5180117e-03, 2.4533743e-05],
       [1.0970687e-05, 4.3153437e-13, 1.5449012e-03, ..., 7.0275539e-16,
        4.8256708e-05, 3.2227177e-07]], dtype=float32)
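Each row of predictions holds the softmax probabilities over the ten digit classes. The short
sketch below (not part of the original run) converts them to predicted labels with NumPy and
compares them against the ground truth:

# Sketch: turn the softmax probabilities into class labels.
predicted_labels = np.argmax(predictions, axis=1)   # index of the largest probability per row
print(predicted_labels[:10])                        # first ten predicted digits
print(y_test[:10])                                  # corresponding ground-truth digits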

Scikit-Learn:
Scikit-Learn is a machine learning library for the Python programming language. It provides
a large number of algorithms that programmers and data scientists can readily deploy in
machine learning models, and it is built on NumPy, SciPy, and Matplotlib.
from sklearn import datasets
iris = datasets.load_iris()
digits = datasets.load_digits()

iris.data

array([[5.1, 3.5, 1.4, 0.2],
[4.9, 3. , 1.4, 0.2],
[4.7, 3.2, 1.3, 0.2],
[4.6, 3.1, 1.5, 0.2],
[5. , 3.6, 1.4, 0.2],
[5.4, 3.9, 1.7, 0.4],
[4.6, 3.4, 1.4, 0.3],
[5. , 3.4, 1.5, 0.2],
[4.4, 2.9, 1.4, 0.2],
[4.9, 3.1, 1.5, 0.1],
[5.4, 3.7, 1.5, 0.2],
[4.8, 3.4, 1.6, 0.2],
[4.8, 3. , 1.4, 0.1],
[4.3, 3. , 1.1, 0.1],
[5.8, 4. , 1.2, 0.2],
[5.7, 4.4, 1.5, 0.4],
[5.4, 3.9, 1.3, 0.4],
[5.1, 3.5, 1.4, 0.3],
[5.7, 3.8, 1.7, 0.3],
[5.1, 3.8, 1.5, 0.3],
[5.4, 3.4, 1.7, 0.2],
[5.1, 3.7, 1.5, 0.4],
[4.6, 3.6, 1. , 0.2],
[5.1, 3.3, 1.7, 0.5],
[4.8, 3.4, 1.9, 0.2],
[5. , 3. , 1.6, 0.2],
[5. , 3.4, 1.6, 0.4],
[5.2, 3.5, 1.5, 0.2],
[5.2, 3.4, 1.4, 0.2],
[4.7, 3.2, 1.6, 0.2],
[4.8, 3.1, 1.6, 0.2],
[5.4, 3.4, 1.5, 0.4],
[5.2, 4.1, 1.5, 0.1],
[5.5, 4.2, 1.4, 0.2],
[4.9, 3.1, 1.5, 0.2],
[5. , 3.2, 1.2, 0.2],
[5.5, 3.5, 1.3, 0.2],
[4.9, 3.6, 1.4, 0.1],
[4.4, 3. , 1.3, 0.2],
[5.1, 3.4, 1.5, 0.2],
[5. , 3.5, 1.3, 0.3],
[4.5, 2.3, 1.3, 0.3],
[4.4, 3.2, 1.3, 0.2],
[5. , 3.5, 1.6, 0.6],
[5.1, 3.8, 1.9, 0.4],
[4.8, 3. , 1.4, 0.3],
[5.1, 3.8, 1.6, 0.2],
[4.6, 3.2, 1.4, 0.2],
[5.3, 3.7, 1.5, 0.2],
[5. , 3.3, 1.4, 0.2],
[7. , 3.2, 4.7, 1.4],
[6.4, 3.2, 4.5, 1.5],
[6.9, 3.1, 4.9, 1.5],
[5.5, 2.3, 4. , 1.3],
[6.5, 2.8, 4.6, 1.5],
[5.7, 2.8, 4.5, 1.3],
[6.3, 3.3, 4.7, 1.6],
[4.9, 2.4, 3.3, 1. ],
[6.6, 2.9, 4.6, 1.3],
[5.2, 2.7, 3.9, 1.4],
[5. , 2. , 3.5, 1. ],
[5.9, 3. , 4.2, 1.5],
[6. , 2.2, 4. , 1. ],
[6.1, 2.9, 4.7, 1.4],
[5.6, 2.9, 3.6, 1.3],
[6.7, 3.1, 4.4, 1.4],
[5.6, 3. , 4.5, 1.5],
[5.8, 2.7, 4.1, 1. ],
[6.2, 2.2, 4.5, 1.5],
[5.6, 2.5, 3.9, 1.1],
[5.9, 3.2, 4.8, 1.8],
[6.1, 2.8, 4. , 1.3],
[6.3, 2.5, 4.9, 1.5],
[6.1, 2.8, 4.7, 1.2],
[6.4, 2.9, 4.3, 1.3],
[6.6, 3. , 4.4, 1.4],
[6.8, 2.8, 4.8, 1.4],
[6.7, 3. , 5. , 1.7],
[6. , 2.9, 4.5, 1.5],
[5.7, 2.6, 3.5, 1. ],
[5.5, 2.4, 3.8, 1.1],
[5.5, 2.4, 3.7, 1. ],
[5.8, 2.7, 3.9, 1.2],
[6. , 2.7, 5.1, 1.6],
[5.4, 3. , 4.5, 1.5],
[6. , 3.4, 4.5, 1.6],
[6.7, 3.1, 4.7, 1.5],
[6.3, 2.3, 4.4, 1.3],
[5.6, 3. , 4.1, 1.3],
[5.5, 2.5, 4. , 1.3],
[5.5, 2.6, 4.4, 1.2],
[6.1, 3. , 4.6, 1.4],
[5.8, 2.6, 4. , 1.2],
[5. , 2.3, 3.3, 1. ],
[5.6, 2.7, 4.2, 1.3],
[5.7, 3. , 4.2, 1.2],
[5.7, 2.9, 4.2, 1.3],
[6.2, 2.9, 4.3, 1.3],
[5.1, 2.5, 3. , 1.1],
[5.7, 2.8, 4.1, 1.3],
[6.3, 3.3, 6. , 2.5],
[5.8, 2.7, 5.1, 1.9],
[7.1, 3. , 5.9, 2.1],
[6.3, 2.9, 5.6, 1.8],
[6.5, 3. , 5.8, 2.2],
[7.6, 3. , 6.6, 2.1],
[4.9, 2.5, 4.5, 1.7],
[7.3, 2.9, 6.3, 1.8],
[6.7, 2.5, 5.8, 1.8],
[7.2, 3.6, 6.1, 2.5],
[6.5, 3.2, 5.1, 2. ],
[6.4, 2.7, 5.3, 1.9],
[6.8, 3. , 5.5, 2.1],
[5.7, 2.5, 5. , 2. ],
[5.8, 2.8, 5.1, 2.4],
[6.4, 3.2, 5.3, 2.3],
[6.5, 3. , 5.5, 1.8],
[7.7, 3.8, 6.7, 2.2],
[7.7, 2.6, 6.9, 2.3],
[6. , 2.2, 5. , 1.5],
[6.9, 3.2, 5.7, 2.3],
[5.6, 2.8, 4.9, 2. ],
[7.7, 2.8, 6.7, 2. ],
[6.3, 2.7, 4.9, 1.8],
[6.7, 3.3, 5.7, 2.1],
[7.2, 3.2, 6. , 1.8],
[6.2, 2.8, 4.8, 1.8],
[6.1, 3. , 4.9, 1.8],
[6.4, 2.8, 5.6, 2.1],
[7.2, 3. , 5.8, 1.6],
[7.4, 2.8, 6.1, 1.9],
[7.9, 3.8, 6.4, 2. ],
[6.4, 2.8, 5.6, 2.2],
[6.3, 2.8, 5.1, 1.5],
[6.1, 2.6, 5.6, 1.4],
[7.7, 3. , 6.1, 2.3],
[6.3, 3.4, 5.6, 2.4],
[6.4, 3.1, 5.5, 1.8],
[6. , 3. , 4.8, 1.8],
[6.9, 3.1, 5.4, 2.1],
[6.7, 3.1, 5.6, 2.4],
[6.9, 3.1, 5.1, 2.3],
[5.8, 2.7, 5.1, 1.9],
[6.8, 3.2, 5.9, 2.3],
[6.7, 3.3, 5.7, 2.5],
[6.7, 3. , 5.2, 2.3],
[6.3, 2.5, 5. , 1.9],
[6.5, 3. , 5.2, 2. ],
[6.2, 3.4, 5.4, 2.3],
[5.9, 3. , 5.1, 1.8]])

print(digits.data)
digits.target

[[ 0. 0. 5. ... 0. 0. 0.]
[ 0. 0. 0. ... 10. 0. 0.]
[ 0. 0. 0. ... 16. 9. 0.]
...
[ 0. 0. 1. ... 6. 0. 0.]
[ 0. 0. 2. ... 12. 0. 0.]
[ 0. 0. 10. ... 12. 1. 0.]]

array([0, 1, 2, ..., 8, 9, 8])

from sklearn import svm

clf = svm.SVC(gamma=0.001, C=100.)

clf.fit(digits.data[:-1], digits.target[:-1])

SVC(C=100.0, gamma=0.001)

clf.predict(digits.data[-1:])

array([8])
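Predicting only the final sample gives no real measure of quality. A minimal sketch (not part
of the original practical, reusing the same SVC settings) evaluates the classifier on a
held-out test split:

# Sketch: proper train/test evaluation of the digits classifier.
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=42)
clf = svm.SVC(gamma=0.001, C=100.)
clf.fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))   # typically around 0.99 on this dataset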

PyTorch:
PyTorch is a machine learning framework based on the Torch library, used for applications
such as computer vision and natural language processing, originally developed by Meta AI
and now part of the Linux Foundation umbrella. It is free and open-source software.
import torch
x = torch.rand(5, 3)
print(x)

tensor([[0.8090, 0.6321, 0.2694],
        [0.3365, 0.1680, 0.5311],
        [0.9948, 0.9386, 0.0314],
        [0.1260, 0.9919, 0.6691],
        [0.6059, 0.1158, 0.4682]])

torch.is_tensor(x)

True

a = torch.randn(4)
print(a)
torch.acos(a)

tensor([ 0.9397, 0.0934, 0.2171, -1.6181])

tensor([0.3492, 1.4772, 1.3520, nan])

a = torch.randn(4)
print(f"a = {a}")
torch.add(a, 20)

b = torch.randn(4)
print(f"b = {b}")
c = torch.randn(4, 1)
print(f"c = {c}")
torch.add(b, c, alpha=10)   # computes b + 10*c, broadcast to a 4x4 result

a = tensor([ 0.0743, -0.9726, -0.6212,  1.0610])

b = tensor([-0.4157,  0.5188, -0.9571, -1.1798])
c = tensor([[-0.5311],
        [-0.7499],
        [ 0.5198],
        [-0.1756]])

tensor([[-5.7270, -4.7925, -6.2685, -6.4911],
        [-7.9143, -6.9798, -8.4557, -8.6784],
        [ 4.7822,  5.7167,  4.2407,  4.0181],
        [-2.1721, -1.2376, -2.7136, -2.9362]])

data = torch.arange(12, dtype=torch.float).reshape(3, 4)
print(f"data - {data}")

data - tensor([[ 0.,  1.,  2.,  3.],
        [ 4.,  5.,  6.,  7.],
        [ 8.,  9., 10., 11.]])

torch.arange(13).chunk(6)   # 13 elements split into chunks of 3, so only 5 chunks are produced

(tensor([0, 1, 2]),
tensor([3, 4, 5]),
tensor([6, 7, 8]),
tensor([ 9, 10, 11]),
tensor([12]))

x = torch.randn(2, 3)
print(f"x = {x}")
torch.transpose(x, 0, 1)

x = tensor([[-1.4354,  0.3738, -0.7075],
            [ 0.4503,  0.9296, -0.5431]])

tensor([[-1.4354,  0.4503],
        [ 0.3738,  0.9296],
        [-0.7075, -0.5431]])
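Beyond tensor operations, what makes PyTorch a machine learning framework is autograd. The
minimal sketch below (not part of the original practical) records a computation and
differentiates it:

# Sketch: automatic differentiation with requires_grad / backward.
w = torch.tensor(1.0, requires_grad=True)
loss = (3.0 * w - 6.0) ** 2    # (3w - 6)^2 = 9 at w = 1
loss.backward()                # d(loss)/dw = 2 * (3w - 6) * 3 = -18
print(w.grad)                  # tensor(-18.)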

TensorFlow:
TensorFlow is an end-to-end open source platform for machine learning. It has a
comprehensive, flexible ecosystem of tools, libraries, and community resources that lets
researchers push the state-of-the-art in ML, and gives developers the ability to easily build
and deploy ML-powered applications.
import tensorflow as tf

tensor_a = tf.constant([[1, 2]], dtype=tf.int32)
tensor_b = tf.constant([[3, 4]], dtype=tf.int32)
tensor_add = tf.add(tensor_a, tensor_b)   # element-wise addition
print(tensor_add)

tf.Tensor([[4 6]], shape=(1, 2), dtype=int32)
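The same platform also provides automatic differentiation for training models. A minimal
sketch (not part of the original practical) computes a gradient with tf.GradientTape:

# Sketch: computing a gradient with tf.GradientTape.
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x**2 + 2 * x           # y = x^2 + 2x
dy_dx = tape.gradient(y, x)    # dy/dx = 2x + 2 = 8 at x = 3
print(dy_dx)                   # tf.Tensor(8.0, shape=(), dtype=float32)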

Seaborn:
Seaborn is a library for creating statistical graphics in Python. It is based on Matplotlib
and integrates closely with Pandas data structures. It is as powerful as Matplotlib but adds
simplicity and unique features, allowing for quick data exploration and understanding.
Complete data frames can be passed directly, and built-in functions for semantic mapping and
statistical aggregation convert the data into graphical visualizations. Seaborn abstracts
away much of Matplotlib's complexity, yet it is still possible to create graphics that meet
all your needs and requirements.

# Import seaborn
import seaborn as sns

# Apply the default theme
sns.set_theme()

# Load an example dataset
tips = sns.load_dataset("tips")

# Create a visualization
sns.relplot(
    data=tips,
    x="total_bill", y="tip", col="time",
    hue="smoker", style="smoker", size="size",
)

<seaborn.axisgrid.FacetGrid at 0x7b3d85e55120>

dots = sns.load_dataset("dots")
sns.relplot(
    data=dots, kind="line",
    x="time", y="firing_rate", col="align",
    hue="choice", size="coherence", style="choice",
    facet_kws=dict(sharex=False),
)

<seaborn.axisgrid.FacetGrid at 0x7b3d86febbb0>

tips = sns.load_dataset("tips")
g = sns.FacetGrid(tips, col="time")
g.map(sns.histplot, "tip")

<seaborn.axisgrid.FacetGrid at 0x7b3d89247cd0>

g = sns.FacetGrid(tips, col="sex", hue="smoker")
g.map(sns.scatterplot, "total_bill", "tip", alpha=.7)
g.add_legend()

<seaborn.axisgrid.FacetGrid at 0x7b3d890ed570>
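The statistical aggregation mentioned above can be illustrated with a bar plot. This short
sketch (not part of the original practical) lets Seaborn compute the mean tip per day and
draw an error bar for each group:

# Sketch: sns.barplot aggregates the raw rows (mean tip per day, with error bars).
sns.barplot(data=tips, x="day", y="tip")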

Plotly:
The plotly Python library is an interactive, open-source plotting library that supports over
40 unique chart types covering a wide range of statistical, financial, geographic, scientific,
and 3-dimensional use-cases.
Built on top of the Plotly JavaScript library (plotly.js), plotly enables Python users to create
beautiful interactive web-based visualizations that can be displayed in Jupyter notebooks,
saved to standalone HTML files, or served as part of pure Python-built web applications
using Dash. The plotly Python library is sometimes referred to as "plotly.py" to differentiate
it from the JavaScript library.
import plotly.express as px
fig = px.scatter(x=[0, 1, 2, 3, 4], y=[0, 1, 4, 9, 16])
fig.show()

df = px.data.iris()
fig = px.scatter(df, x="sepal_width", y="sepal_length", color="species",
                 size='petal_length', hover_data=['petal_width'])
fig.show()

df = px.data.tips()
fig = px.scatter(df, x="total_bill", y="tip", color="smoker",
                 facet_col="sex", facet_row="time")
fig.show()

df = px.data.tips()
fig = px.scatter(df, x="total_bill", y="tip", trendline="ols")
fig.show()
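As noted above, Plotly figures can also be saved as standalone HTML files. A minimal sketch
(not part of the original practical; the file name is arbitrary):

# Sketch: export the last figure to a self-contained, interactive HTML file.
fig.write_html("tips_trendline.html")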
