Machine Learning with PyTorch
Hi, I’m Riza
Agenda
• Introduction to PyTorch

• Getting Started with PyTorch

• Handling Datasets

• Build a Simple Neural Network with PyTorch
Introduction
A scientific computing package that replaces NumPy to
harness the power of the GPU.
A deep learning research platform that provides
maximum flexibility and speed.
A complete Python rewrite of the machine learning library
Torch, which was written in Lua.
Chainer — a deep learning library, huge in the NLP
community and a big inspiration to the PyTorch team.
HIPS Autograd — an automatic differentiation library that
became one of PyTorch's big features.
The need for dynamic execution.
January 2017
PyTorch was born 🍼
July 2017
Kaggle Data Science Bowl won using PyTorch 🎉
August 2017
PyTorch 0.2 🚢
September 2017
fast.ai switches to PyTorch 🚀
October 2017
SalesForce releases QRNN 🖖
November 2017
Uber releases Pyro 🚗
December 2017
PyTorch 0.3 release! 🛳
2017 in review
Killer Features
Just Python, on steroids
Dynamic computation allows flexibility
of input
Best suited for research and
prototyping
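To make "dynamic computation" concrete, here is a minimal sketch (not from the original deck): the graph is rebuilt on every forward pass, so plain Python control flow can depend on the data itself.

import torch

def forward(x):
    h = x * 2
    steps = 0
    # Data-dependent loop: how many times we run depends on the input values
    while h.norm() < 10 and steps < 20:
        h = h * 2
        steps += 1
    return h

x = torch.randn(3, requires_grad=True)
y = forward(x).sum()
y.backward()    # gradients flow through whichever path was actually taken
print(x.grad)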
Summary
• PyTorch is a Python machine learning library focused
on research
• Released in January 2017; used by tech
companies and universities
• A dynamic and Pythonic way to do machine
learning
Getting Started
Installation
$ conda install pytorch torchvision -c pytorch # macOS
$ conda install pytorch-cpu torchvision-cpu -c pytorch # Linux
More: https://pytorch.org/get-started/locally/
Tensors
Scalar
Rank: 0
Dimension: ()
3.14
Tensors
Scalar
import torch
pi = torch.tensor(3.14)
print(pi) # tensor(3.1400)
Tensors
Vector
Rank: 1
Dimension: (3,)
[3, 4, 8]
Tensors
Vector
import torch
vector = torch.Tensor(3,)  # uninitialized: values are whatever was in memory
print(vector)
# tensor([ 0.0000e+00, 3.6893e+19, -7.6570e-25])
Tensors
Matrix
Rank: 2
Dimension: (2, 3)
[[1, 2, 3],
[4, 5, 6]]
Tensors
Matrix
import torch
matrix = torch.Tensor(2, 3)
print(matrix)
# tensor([[0.0000e+00, 1.5846e+29, 2.8179e+26],
# [1.0845e-19, 4.2981e+21, 6.3828e+28]])
Tensors
Tensor
Rank: 3
Dimension: (2, 2, 3)
[[[1, 2, 3],
  [4, 5, 6]],
 [[7, 8, 9],
  [10, 11, 12]]]
Tensors
Tensor
import torch
tensor = torch.Tensor(2, 2, 3)
print(tensor)
# tensor([[[ 0.0000e+00, 3.6893e+19, 0.0000e+00],
# [ 3.6893e+19, 4.2039e-45, 3.6893e+19]],
# [[ 1.6986e+06, -2.8643e-42, 4.2981e+21],
# [ 6.3828e+28, 3.8016e-39, 2.7551e-40]]])
Operators
import torch
x = torch.Tensor(5, 3)
# Randomize Tensor
y = torch.rand(5, 3)
# Add
print(x + y) # or
print(torch.add(x, y))
# Matrix Multiplication
a = torch.randn(2, 3)
b = torch.randn(3, 3)
print(torch.mm(a, b))
https://pytorch.org/docs/stable/tensors.html
Working With NumPy
import torch
a = torch.ones(5)
print(a) # tensor([1., 1., 1., 1., 1.])
b = a.numpy()
print(b) # [1. 1. 1. 1. 1.]
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)  # b shares memory with a
np.add(a, 1, out=a)
print(a) "#[2. 2. 2. 2. 2.]
print(b)
#tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
Working With GPU
import torch
x = torch.Tensor(5, 3)
y = torch.rand(5, 3)
if torch.cuda.is_available():
    x = x.cuda()
    y = y.cuda()
    x + y
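On newer PyTorch versions there is also a device-agnostic variant of the same idea (a sketch, assuming torch.device is available):

import torch

# Pick the GPU when available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.rand(5, 3).to(device)
y = torch.rand(5, 3).to(device)
print(x + y)  # computed on the chosen device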
Summary
• A tensor is like a Rubik's cube: a multidimensional array
• Scalar, vector, matrix and tensor are the same
concept with different ranks and dimensions
• We can use torch.Tensor() to create a
tensor.
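A small companion sketch (not in the original deck): torch.tensor() builds a tensor from explicit data, matching the rank examples above.

import torch

scalar = torch.tensor(3.14)        # rank 0, shape ()
vector = torch.tensor([3, 4, 8])   # rank 1, shape (3,)
matrix = torch.tensor([[1, 2, 3],
                       [4, 5, 6]]) # rank 2, shape (2, 3)
print(matrix.shape)  # torch.Size([2, 3])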
Autograd
Differentiation
Refresher
IF y = f(x) = 2x THEN dy/dx = 2

IF y = f(x1, x2, …, xn) THEN [dy/dx1, dy/dx2, …, dy/dxn]
is the gradient of y w.r.t. [x1, x2, …, xn]
Autograd
• The calculus chain rule, on steroids
• Derivative of a function within a function
• Complex functions can be written as many
compositions of simple functions
• Provides auto differentiation on all tensor
operations
• In torch.autograd module
Variable
• Crucial data structure, needed for automatic
differentiation
• Wrapper around Tensor
• Records reference to the creator function
Variable
import torch
from torch.autograd import Variable
x = Variable(torch.FloatTensor([11.2]),
requires_grad=True)
y = 2 * x
print(x)
# tensor([11.2000], requires_grad=True)
print(y)
# tensor([22.4000], grad_fn=<MulBackward>)
print(x.data) # tensor([11.2000])
print(y.data) # tensor([22.4000])
print(x.grad_fn) # None
print(y.grad_fn)
# <MulBackward object at 0x10ae58e48>
y.backward() # Calculates the gradients
print(x.grad) # tensor([2.])
Summary
• Autograd provides auto differentiation on all tensor
operations, inside torch.autograd module
• Variable is a wrapper around Tensor that records a
reference to the creator function
Handling Datasets
Dataset
A collection of training examples
Epoch
One pass of the entire dataset
through your model

Batch
A subset of training examples passed
through your model at a time
Iteration
A single pass of a batch
Example
A dataset of 1,000 images
1 epoch
Batch size of 50
Leads to 20 iterations
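A quick sanity check of that arithmetic (a sketch, not from the deck):

num_images = 1000
batch_size = 50
num_epochs = 1

iterations = num_epochs * (num_images // batch_size)
print(iterations)  # 20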
Access Dataset
CIFAR10 dataset via torchvision
import torch
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
Access Dataset
Transform the data
transform = transforms.Compose([
    transforms.ToTensor(),  # PIL image -> float tensor in [0, 1]
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])  # per-channel mean, std
Access Dataset
Prepare train data
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
print(len(trainset.train_data)) # 50000
print(trainset.train_labels[1]) # 9 = Truck image
Access Dataset
Prepare train loader
trainloader = torch.utils.data.DataLoader(
trainset, batch_size=10, shuffle=True, num_workers=2)
Access Dataset
Iterate data and train the model
for i, data in enumerate(trainloader):
    data, labels = data
    print(type(data))    # <class 'torch.Tensor'>
    print(data.size())   # torch.Size([10, 3, 32, 32])
    print(type(labels))  # <class 'torch.Tensor'>
    print(labels.size()) # torch.Size([10])
    # Model training happens here ...
Datasets
Others
• COCO — large-scale object detection, segmentation, and
captioning dataset. 
• MNIST — handwritten digit database
• Fashion-MNIST — fashion product database
• LSUN — Large-scale Image Dataset 
• Much more…
Summary
• We can use existing datasets provided by torch and
torchvision, such as CIFAR10
• A dataset is a collection of training examples, an epoch is one
pass of the entire dataset through the model, a batch is a subset
of training examples, and an iteration is a single pass of one batch
Neural Network
Neural Network
Layers: Input → Hidden 1 → Hidden 2 → Output
Neural Network
A Neuron
[Diagram: a neuron combines inputs ip1..ip3 with weights w1..w3 and a
bias, then applies an activation function fn()]
Output = fn(w1*ip1 + w2*ip2 + w3*ip3 + bias)
where [ip1, ip2, ip3] is the input vector, [w1, w2, w3] is the
weights vector, and fn() is the activation function.
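A minimal sketch of that formula in PyTorch, with made-up numbers (not from the deck):

import torch

ip = torch.tensor([1.0, 2.0, 3.0])    # input vector
w = torch.tensor([0.5, -1.0, 0.25])   # weights vector
bias = torch.tensor(0.1)

def fn(x):  # activation function (ReLU here)
    return torch.clamp(x, min=0)

output = fn(torch.dot(w, ip) + bias)
print(output)  # fn(0.5 - 2.0 + 0.75 + 0.1) = fn(-0.65) = tensor(0.)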
Activation Functions
Sigmoid
f(x) = 1 / (1 + e^(-x))
Activation Functions
Tanh
f(x) = (e^x − e^(-x)) / (e^x + e^(-x))
Activation Functions
Rectified Linear Unit
f(x) = max(x,0)
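A quick sketch evaluating the three activations above on a few sample points (outputs as PyTorch prints them, rounded):

import torch

x = torch.tensor([-2.0, 0.0, 2.0])
print(torch.sigmoid(x))  # tensor([0.1192, 0.5000, 0.8808])
print(torch.tanh(x))     # tensor([-0.9640, 0.0000, 0.9640])
print(torch.relu(x))     # tensor([0., 0., 2.])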
Summary
• A neural net is a collection of interconnected neurons, each
consisting of inputs, weights, a bias and an output.
• To generate an output we activate it using an
activation function such as sigmoid, tanh or ReLU.
Our First Neural Network
Feed Forward NN
The Iris
Classify flowers based on their structure
Feed Forward NN
The Model
Feed Forward NN
The Dataset
Table 1
sepal_length_cm sepal_width_cm petal_length_cm petal_width_cm class
5.1 3.5 1.4 0.2 Iris-setosa
4.9 3.0 1.4 0.2 Iris-setosa
7.0 3.2 4.7 1.4 Iris-versicolor
6.4 3.2 4.5 1.5 Iris-versicolor
6.9 3.1 4.9 1.5 Iris-versicolor
6.4 2.8 5.6 2.2 Iris-virginica
6.3 2.8 5.1 1.5 Iris-virginica
6.1 2.6 5.6 1.4 Iris-virginica
The Iris
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
from torch.autograd import Variable
from data import iris
Import things
The Iris
class IrisNet(nn.Module):
    def __init__(self, input_size,
                 hidden1_size, hidden2_size, num_classes):
        super(IrisNet, self).__init__()
        self.layer1 = nn.Linear(input_size, hidden1_size)
        self.act1 = nn.ReLU()
        self.layer2 = nn.Linear(hidden1_size, hidden2_size)
        self.act2 = nn.ReLU()
        self.layer3 = nn.Linear(hidden2_size, num_classes)

    def forward(self, x):
        out = self.layer1(x)
        out = self.act1(out)
        out = self.layer2(out)
        out = self.act2(out)
        out = self.layer3(out)
        return out

model = IrisNet(4, 100, 50, 3)
print(model)
Create Module and Instance
The Iris
batch_size = 60
iris_data_file = 'data/iris.data.txt'
train_ds, test_ds = iris.get_datasets(iris_data_file)
train_loader = torch.utils.data.DataLoader(dataset=train_ds,
batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_ds,
batch_size=batch_size, shuffle=True)
DataLoader
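The data.iris helper module is not shown in the deck. Here is a hypothetical sketch of what its get_datasets() could look like, assuming a plain CSV with four measurements plus a class name per row (every name below is illustrative, not the author's actual code):

import pandas as pd
import torch
from torch.utils.data import Dataset

class IrisDataset(Dataset):
    def __init__(self, df):
        self.data = df  # keep the DataFrame; the training code reads .data.values

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        row = self.data.values[idx]
        features = torch.FloatTensor(row[0:4].astype(float))
        label = int(row[4])
        return features, label

def get_datasets(path, train_fraction=0.8):
    df = pd.read_csv(path, header=None)
    df[4] = df[4].astype('category').cat.codes     # class name -> integer index
    df = df.sample(frac=1).reset_index(drop=True)  # shuffle rows
    split = int(len(df) * train_fraction)
    return IrisDataset(df[:split]), IrisDataset(df[split:].reset_index(drop=True))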
Loss Function
[Diagram: the input produces an output/prediction; the loss function
compares the prediction with the actual result and yields a loss score]
Loss Function
In PyTorch
• L1Loss
• MSELoss
• CrossEntropyLoss
• BCELoss
• SoftMarginLoss
• More: https://pytorch.org/docs/stable/nn.html#loss-functions
Loss Function
CrossEntropyLoss
Measures the performance of a classification model
whose output is a probability value between 0 and 1.
           🍎    🍌    🍍
Prediction 0.02  0.88  0.1    (Actual: 🍌)
Loss Score 0.98  0.12  0.9
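One caveat worth noting for the sketch below: PyTorch's nn.CrossEntropyLoss actually expects raw, unnormalized scores (logits) plus a target class index, and applies softmax internally (numbers here are made up):

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

logits = torch.tensor([[0.5, 3.2, 1.1]])  # scores for 🍎, 🍌, 🍍
target = torch.tensor([1])                # actual class: 🍌 (index 1)

loss = criterion(logits, target)
print(loss)  # small loss, since the 🍌 logit is already the largest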
Optimizer Function
[Diagram: on each iteration, the loss score from comparing the
output/prediction with the actual result is used to calculate
gradients, which the optimizer uses to update the model]
Optimizer Function
In PyTorch
• Stochastic Gradient Descent (SGD)
• Adam
• Adadelta
• RMSprop
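All of these share the same construction pattern: pass the model's parameters plus optimizer-specific options. A minimal sketch with a stand-in model (names below are illustrative):

import torch
import torch.nn as nn

net = nn.Linear(4, 3)  # stand-in model for illustration

sgd = torch.optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
adam = torch.optim.Adam(net.parameters(), lr=0.001)
adadelta = torch.optim.Adadelta(net.parameters())
rmsprop = torch.optim.RMSprop(net.parameters(), lr=0.01)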
Back to Iris
Let’s Train The Neural Network
net = IrisNet(4, 100, 50, 3)
# Loss Function
criterion = nn.CrossEntropyLoss()
# Optimizer
learning_rate = 0.001
optimizer = torch.optim.SGD(net.parameters(),
lr=learning_rate,
nesterov=True,
momentum=0.9,
dampening=0)
Back to Iris
Let’s Train The Neural Network
num_epochs = 500
for epoch in range(num_epochs):
    train_correct = 0
    train_total = 0
    for i, (items, classes) in enumerate(train_loader):
        # Convert torch tensors to Variables
        items = Variable(items)
        classes = Variable(classes)
Back to Iris
Let’s Train The Neural Network
net.train()            # Training mode
optimizer.zero_grad()  # Reset gradients from past operation
outputs = net(items)   # Forward pass
loss = criterion(outputs, classes)  # Calculate the loss
loss.backward()        # Calculate the gradients
optimizer.step()       # Adjust weights based on gradients
train_total += classes.size(0)
_, predicted = torch.max(outputs.data, 1)
train_correct += (predicted == classes.data).sum()
print('Epoch %d/%d, Iteration %d/%d, Loss: %.4f'
      % (epoch+1, num_epochs, i+1,
         len(train_ds)//batch_size, loss.data[0]))
Back to Iris
Let’s Train The Neural Network
net.eval() # Put the network into evaluation mode
train_loss.append(loss.data[0])
train_accuracy.append((100 * train_correct / train_total))
# Record the testing loss
test_items = torch.FloatTensor(test_ds.data.values[:, 0:4])
test_classes = torch.LongTensor(test_ds.data.values[:, 4])
outputs = net(Variable(test_items))
loss = criterion(outputs, Variable(test_classes))
test_loss.append(loss.data[0])
# Record the testing accuracy
_, predicted = torch.max(outputs.data, 1)
total = test_classes.size(0)
correct = (predicted == test_classes).sum()
test_accuracy.append((100 * correct / total))
Back to Iris
Let’s Train The Neural Network
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
from torch.autograd import Variable
from data import iris

# Create the module
class IrisNet(nn.Module):
    def __init__(self, input_size, hidden1_size, hidden2_size, num_classes):
        super(IrisNet, self).__init__()
        self.layer1 = nn.Linear(input_size, hidden1_size)
        self.act1 = nn.ReLU()
        self.layer2 = nn.Linear(hidden1_size, hidden2_size)
        self.act2 = nn.ReLU()
        self.layer3 = nn.Linear(hidden2_size, num_classes)

    def forward(self, x):
        out = self.layer1(x)
        out = self.act1(out)
        out = self.layer2(out)
        out = self.act2(out)
        out = self.layer3(out)
        return out

# Create a model instance
model = IrisNet(4, 100, 50, 3)
print(model)

# Create the DataLoader
batch_size = 60
iris_data_file = 'data/iris.data.txt'
train_ds, test_ds = iris.get_datasets(iris_data_file)
print('# instances in training set: ', len(train_ds))
print('# instances in testing/validation set: ', len(test_ds))
train_loader = torch.utils.data.DataLoader(dataset=train_ds, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_ds, batch_size=batch_size, shuffle=True)

# Model
net = IrisNet(4, 100, 50, 3)

# Loss Function
criterion = nn.CrossEntropyLoss()

# Optimizer
learning_rate = 0.001
optimizer = torch.optim.SGD(net.parameters(),
                            lr=learning_rate,
                            nesterov=True,
                            momentum=0.9,
                            dampening=0)

# Training iteration
num_epochs = 500
train_loss = []
test_loss = []
train_accuracy = []
test_accuracy = []

for epoch in range(num_epochs):
    train_correct = 0
    train_total = 0
    for i, (items, classes) in enumerate(train_loader):
        # Convert torch tensors to Variables
        items = Variable(items)
        classes = Variable(classes)

        net.train()            # Training mode
        optimizer.zero_grad()  # Reset gradients from past operation
        outputs = net(items)   # Forward pass
        loss = criterion(outputs, classes)  # Calculate the loss
        loss.backward()        # Calculate the gradients
        optimizer.step()       # Adjust weights/parameters based on gradients

        train_total += classes.size(0)
        _, predicted = torch.max(outputs.data, 1)
        train_correct += (predicted == classes.data).sum()
        print('Epoch %d/%d, Iteration %d/%d, Loss: %.4f'
              % (epoch+1, num_epochs, i+1, len(train_ds)//batch_size, loss.data[0]))

    net.eval()  # Put the network into evaluation mode
    train_loss.append(loss.data[0])
    train_accuracy.append(100 * train_correct / train_total)

    # Record the testing loss
    test_items = torch.FloatTensor(test_ds.data.values[:, 0:4])
    test_classes = torch.LongTensor(test_ds.data.values[:, 4])
    outputs = net(Variable(test_items))
    loss = criterion(outputs, Variable(test_classes))
    test_loss.append(loss.data[0])

    # Record the testing accuracy
    _, predicted = torch.max(outputs.data, 1)
    total = test_classes.size(0)
    correct = (predicted == test_classes).sum()
    test_accuracy.append(100 * correct / total)
Summary
• Created a feed-forward neural network to predict the
type of a flower
• Started by reading the dataset and building the DataLoaders
• Chose a loss function and an optimizer
• Trained and evaluated
What’s Next?!
• pytorch.org/tutorials

• course.fast.ai

• coursera.org/learn/machine-learning

• kaggle.com/datasets
That’s All From Me
github.com/rizafahmi
slideshare.net/rizafahmi
rizafahmi@gmail.com
twitter.com/rizafahmi22
facebook.com/rizafahmi
