Lecture 1: Introduction to Deep Learning

CSE599W: Spring 2018


Lecturers
ML Applications need more than algorithms

Learning Systems: this course


What’s this course
● Not about the learning aspect of deep learning (except for the first two lectures)

● System aspect of deep learning: faster training, efficient serving, lower memory consumption
Logistics
● Location/Date: Tue/Thu 11:30 am - 12:50 pm, MUE 153

● Join Slack: https://uw-cse.slack.com, dlsys channel

● We may use other times and locations for invited speakers.

● Compute resources: AWS Educate, instructions sent via email.

● Office hours by appointment


Homeworks and Projects
● Two coding assignments

● Group project
○ Two- to three-person teams
○ Poster presentation and write-up
A Crash Course on Deep Learning

Elements of Machine Learning
● Model
● Objective
● Training
What’s Special About Deep Learning

[Figure: layer 1 extractor → layer 2 extractor → predictor]

● Compositional model
● End-to-end training


Ingredients in Deep Learning
● Model and architecture

● Objective function, training techniques
○ Which feedback should we use to guide the algorithm?
○ Supervised, RL, adversarial training

● Regularization, initialization (coupled with modeling)
○ Dropout, Xavier initialization

● A large enough amount of data


Major Architectures

● Image modeling: convolutional nets
● Language/speech: recurrent nets
Image Modeling and Convolutional Nets
Breakthrough of Image Classification
Evolution of ConvNets
• LeNet (LeCun, 1998)
– Basic structures: convolution, max-pooling, softmax
• AlexNet (Krizhevsky et al., 2012)
– ReLU, Dropout
• GoogLeNet (Szegedy et al., 2014)
– Multiple independent pathways (sparse weight matrix)
• Inception BN (Ioffe et al., 2015)
– Batch normalization
• Residual net (He et al., 2015)
– Residual pathway
Fully Connected Layer

[Figure: fully connected layer; every output unit is connected to every input unit]
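As a concrete reference (not from the slides), a fully connected layer computes output = W x + b. A minimal NumPy sketch with made-up sizes:

    import numpy as np

    def fully_connected(x, W, b):
        # Every output unit is a weighted sum of all input units plus a bias.
        return W @ x + b

    # Illustrative sizes only: 784 inputs (a 28x28 image), 10 outputs.
    x = np.random.randn(784)
    W = np.random.randn(10, 784) * 0.01
    b = np.zeros(10)
    y = fully_connected(x, W, b)   # shape (10,)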
Convolution = Spatial Locality + Sharing

[Figure: spatial locality (each output connects only to a local patch of the input), shown without and with weight sharing]
Convolution with Multiple Channels

Source: http://cs231n.github.io/convolutional-networks/
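A hedged NumPy sketch of a multi-channel 2D convolution (no padding, stride 1), just to make the locality and weight sharing concrete; it is not the implementation of any particular framework:

    import numpy as np

    def conv2d(x, w, b):
        # x: (C_in, H, W) input, w: (C_out, C_in, K, K) filters, b: (C_out,) biases.
        # The same filter weights are shared across all spatial locations.
        c_out, c_in, k, _ = w.shape
        _, h, wd = x.shape
        out = np.zeros((c_out, h - k + 1, wd - k + 1))
        for o in range(c_out):
            for i in range(h - k + 1):
                for j in range(wd - k + 1):
                    # Each output value looks only at a local KxK patch (spatial locality).
                    out[o, i, j] = np.sum(x[:, i:i+k, j:j+k] * w[o]) + b[o]
        return out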
Pooling Layer
Can be replaced by a strided convolution

Source: http://cs231n.github.io/convolutional-networks/
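A minimal sketch of 2x2 max pooling with stride 2 (assuming the spatial sizes are divisible by 2), the operation the slide says can be replaced by a strided convolution:

    import numpy as np

    def max_pool2x2(x):
        # x: (C, H, W) with H and W divisible by 2.
        c, h, w = x.shape
        x = x.reshape(c, h // 2, 2, w // 2, 2)
        # Take the maximum over each non-overlapping 2x2 window.
        return x.max(axis=(2, 4))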
LeNet (LeCun 1998)

• Convolution
• Pooling
• Flatten
• Fully connected
• Softmax output
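A hedged sketch of a LeNet-style network in MXNet Gluon (the framework used in Lab 1). Layer sizes follow the classic LeNet-5, but the ReLU activations and exact shapes are illustrative rather than a verbatim reproduction of the 1998 model:

    from mxnet.gluon import nn

    # LeNet-style stack: convolution, pooling, flatten, fully connected, softmax output.
    net = nn.Sequential()
    net.add(nn.Conv2D(channels=6, kernel_size=5, activation='relu'),
            nn.MaxPool2D(pool_size=2, strides=2),
            nn.Conv2D(channels=16, kernel_size=5, activation='relu'),
            nn.MaxPool2D(pool_size=2, strides=2),
            nn.Flatten(),
            nn.Dense(120, activation='relu'),
            nn.Dense(84, activation='relu'),
            nn.Dense(10))   # class scores; softmax is applied inside the loss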
AlexNet (Krizhevsky et al., 2012)
Challenges: From LeNet to AlexNet

● Much more data needed: ImageNet

● Much heavier computational burden: GPUs

● Overfitting prevention
○ Dropout regularization

● Stable initialization and training
○ Exploding/vanishing gradient problems
○ Requires careful tuning of initialization and data normalization
ReLU Unit

• ReLU(x) = max(0, x)

• Why ReLU?
– Cheap to compute
– It is roughly linear
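For reference, the ReLU and its gradient in NumPy; the cheapness comes from a single elementwise comparison:

    import numpy as np

    def relu(x):
        # max(0, x), applied elementwise.
        return np.maximum(0.0, x)

    def relu_grad(x):
        # Gradient is 1 where x > 0 and 0 elsewhere (a value of 0 is used at x = 0).
        return (x > 0).astype(x.dtype)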
Dropout Regularization
● Randomly zero out neurons with probability 0.5

● During prediction, use the expected value (keep all neurons but scale the output by 0.5)

[Figure: dropout mask applied to the layer's activations]
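A minimal NumPy sketch of the scheme described above (drop with probability 0.5 at training time, scale by 0.5 at prediction time); modern code often uses the equivalent "inverted dropout" that scales at training time instead:

    import numpy as np

    def dropout_train(x, p_drop=0.5, rng=np.random):
        # Randomly zero out each unit with probability p_drop.
        mask = (rng.random_sample(x.shape) >= p_drop).astype(x.dtype)
        return x * mask

    def dropout_predict(x, p_drop=0.5):
        # Keep all units but scale by the keep probability (the expected value).
        return x * (1.0 - p_drop)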
GoogLeNet: Multiple Pathways, Fewer Parameters
Vanishing and Exploding Value Problem
● Imagine each layer multiplies its input by the same weight matrix W
○ ‖W‖ > 1: exponential explosion
○ ‖W‖ < 1: exponential vanishing

● In ConvNets the weights are not tied, but their magnitude still matters
○ Training deep nets was sensitive to initialization
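A quick numerical illustration of the point (toy scalar weights, not a real network): repeated multiplication by a value slightly above or below 1 explodes or vanishes exponentially with depth.

    depth = 50
    print(1.1 ** depth)   # ~117.4: values explode as depth grows
    print(0.9 ** depth)   # ~0.005: values vanish as depth grows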
Batch Normalization: Stabilize the Magnitude

• Subtract the mean
• Divide by the standard deviation
• The output is invariant to the input scale
– Scale the input by a constant
– The output of BN remains the same

• Impact
– Easier to tune the learning rate
– Less sensitive to initialization
(Ioffe et al., 2015)
The Scale Normalization (Assumes zero mean)

scale_norm(x) = x / sqrt(mean(x^2) + eps)

Invariance to magnitude: scaling x by a constant c > 0 leaves scale_norm(x) (essentially) unchanged.
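A hedged NumPy sketch of the training-time batch normalization transform (per-feature mean and variance over the batch, plus a learned scale and shift); it omits the running statistics used at prediction time:

    import numpy as np

    def batch_norm_train(x, gamma, beta, eps=1e-5):
        # x: (batch, features). Normalize each feature over the batch,
        # then apply the learned scale (gamma) and shift (beta).
        mean = x.mean(axis=0)
        var = x.var(axis=0)
        x_hat = (x - mean) / np.sqrt(var + eps)
        return gamma * x_hat + beta

    # Scaling the input by a constant leaves the normalized output (nearly) unchanged.
    x = np.random.randn(32, 8)
    g, b = np.ones(8), np.zeros(8)
    print(np.allclose(batch_norm_train(x, g, b), batch_norm_train(10 * x, g, b), atol=1e-3))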
Residual Net (He et al., 2015)

● Instead of outputting the transformation directly, add the transformation result to the input: output = x + F(x)

● Partly solves the vanishing/exploding value problem
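A minimal sketch of the idea in NumPy: the block outputs x + F(x) rather than F(x), so the identity path lets values (and gradients) pass through unchanged. F here is an arbitrary placeholder transformation, not the two-convolution block of the paper:

    import numpy as np

    def residual_block(x, transform):
        # Add the transformation result to the input instead of replacing it.
        return x + transform(x)

    # Placeholder transformation for illustration: a small random linear map + ReLU.
    W = np.random.randn(8, 8) * 0.1
    f = lambda x: np.maximum(0.0, W @ x)
    y = residual_block(np.random.randn(8), f)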
Evolution of ConvNets
• LeNet (LeCun, 1998)
– Basic structures: convolution, max-pooling, softmax
• AlexNet (Krizhevsky et al., 2012)
– ReLU, Dropout
• GoogLeNet (Szegedy et al., 2014)
– Multiple independent pathways (sparse weight matrix)
• Inception BN (Ioffe et al., 2015)
– Batch normalization
• Residual net (He et al., 2015)
– Residual pathway
More Resources
● Deep Learning book (Goodfellow et al.)

● Stanford CS231n: Convolutional Neural Networks for Visual Recognition

● http://dlsys.cs.washington.edu/materials
Lab 1 on Thursday
● Walk through how to implement a simple model for digit recognition using MXNet Gluon
● Focus is on data I/O, model definition, and the typical training loop
● Get familiar with typical framework APIs for vision tasks

● Before class: sign up for AWS Educate credits

● https://aws.amazon.com/education/awseducate/apply/
● Create an AWS Educate Starter Account to avoid getting charged
● We will email out instructions, but it is very simple to do yourself, so do it today!
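Not part of the slides, but as a preview of what a "typical training loop" looks like in Gluon, a hedged sketch with a placeholder model and random data standing in for MNIST (the lab's actual data loading and model will differ):

    import mxnet as mx
    from mxnet import autograd, gluon
    from mxnet.gluon import nn

    # Tiny placeholder model and fake data, just to show the loop structure.
    net = nn.Sequential()
    net.add(nn.Dense(128, activation='relu'), nn.Dense(10))
    net.initialize()

    X = mx.nd.random.normal(shape=(256, 784))                      # fake "images"
    y = mx.nd.random.randint(0, 10, shape=(256,)).astype('float32')  # fake labels
    loader = gluon.data.DataLoader(gluon.data.ArrayDataset(X, y),
                                   batch_size=32, shuffle=True)

    loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
    trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

    for data, label in loader:
        with autograd.record():            # record the forward pass for autodiff
            loss = loss_fn(net(data), label)
        loss.backward()                     # backpropagate
        trainer.step(data.shape[0])         # SGD update scaled by the batch size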
