Module 3 Notes
Linear methods (PCA, LDA), manifolds, metric learning - Autoencoders and dimensionality
reduction in networks - Introduction to ConvNets - Architectures: AlexNet, VGG, Inception, ResNet
- Training a ConvNet: weight initialization, batch normalization, hyperparameter optimization.
3.1 Linear Factor Models:
Linear factor models are used as building blocks of mixture models or of larger, deep
probabilistic models. A linear factor model is defined by the use of a stochastic linear decoder
function that generates x by adding noise to a linear transformation of h. It allows us to
discover explanatory factors that have a simple joint distribution. A linear factor model
describes the data-generation process as follows. First, we sample the explanatory factors h from
a distribution
h ∼ p(h)
where p(h) is a factorial distribution, so the individual factors are independent. We then sample
the observed variables given the factors:
x = Wh + b + noise
where the noise is typically Gaussian and independent across dimensions.
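As a concrete illustration, here is a minimal sketch of this generation process, assuming a factorial Gaussian prior for p(h) and Gaussian noise; the matrix W, offset b, and noise scale below are made-up values, not ones from these notes.

```python
import numpy as np

# Minimal sketch of a linear factor model's data-generation process.
rng = np.random.default_rng(0)

n_factors, n_observed = 3, 5
W = rng.normal(size=(n_observed, n_factors))   # factor loading matrix (illustrative)
b = np.zeros(n_observed)                       # offset
noise_std = 0.1                                # std. dev. of the additive noise

# 1. Sample the explanatory factors h from a factorial prior p(h).
h = rng.normal(size=n_factors)

# 2. Generate x by adding noise to a linear transformation of h.
x = W @ h + b + noise_std * rng.normal(size=n_observed)
print(x)
```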
Figure 3A: PCA for Data Representation. Figure 3B: PCA for Dimension Reduction.
If
the variation in a data set is caused by some natural property, or is caused by random
experimental error, then we may expect it to be normally distributed. In this case we show
the nominal extent of the normal distribution by a hyper-ellipse (the two-dimensional ellipse
in the example). The hyper-ellipse encloses data points that are thought of as belonging to a
class. It is drawn at a distance beyond which the probability of a point belonging to the class
is low, and can be thought of as a class boundary.
If the variation in the data is caused by some other relationship, then PCA gives us a
way of reducing the dimensionality of a data set. Consider two variables that are nearly related
linearly as shown in figure 3B. As in figure 3A the principal direction in which the data varies
is shown by the U axis, and the secondary direction by the V axis. However, in this case the
V coordinates are all very close to zero. We may assume, for example, that they are only
non-zero because of experimental noise. Thus, in the U-V axis system we can represent the
data set by the single variable U and discard V, reducing the dimensionality of the problem by 1.
Computing the Principal Components
For an n × n matrix A, the eigenvalues λ are the roots of the equation
det(A − λI) = 0
where I is the n × n identity matrix. This equation is called the characteristic equation (or
characteristic polynomial) of A and has n roots.
Let λ be an eigenvalue of A. Then there exists a vector x such that:
Ax = λx
The vector x is called an eigenvector of A associated with the eigenvalue λ. Notice that there
is no unique solution for x in the above equation. It is a direction vector only and can be
scaled to any magnitude. To find a numerical solution for x we need to set one of its elements
to an arbitrary value, say 1, which gives us a set of simultaneous equations to solve for the
other elements. If there is no solution, we repeat the process with another element. Ordinarily
we normalize the final values so that x has unit length, that is, xᵀx = 1.
Suppose we have a 3 × 3 matrix A with eigenvectors x1, x2, x3, and eigenvalues λ1, λ2, λ3
so:
Ax1 = λ1x1 Ax2 = λ2x2 Ax3 = λ3x3
Putting the eigenvectors as the columns of a matrix gives:
A [x1 x2 x3] = [x1 x2 x3] diag(λ1, λ2, λ3)
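The steps above can be carried out numerically. The sketch below (data and variable names are illustrative, assuming NumPy) computes the principal components of the two nearly linearly related variables of Figure 3B by eigendecomposition of the covariance matrix, then keeps only the U coordinate and discards V.

```python
import numpy as np

# Two nearly linearly related variables, as in Figure 3B (synthetic data).
rng = np.random.default_rng(0)
u = rng.normal(size=200)
X = np.column_stack([u, 2.0 * u + 0.05 * rng.normal(size=200)])

# Eigendecomposition of the covariance matrix A solves A x = lambda x.
X_centred = X - X.mean(axis=0)
A = np.cov(X_centred, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(A)          # eigenvectors come back unit length

# Sort by decreasing eigenvalue: the first column is the principal (U) axis.
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Project onto U and discard V, reducing the dimensionality by 1.
U_coords = X_centred @ eigvecs[:, 0]
print(eigvals, U_coords.shape)
```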
4. Improves Visualization:
3.4 Linear Discriminant Analysis (LDA):
Linear Discriminant Analysis, as its name suggests, is a linear model for classification and
dimensionality reduction. It is most commonly used for feature extraction in pattern classification
problems.
3.4.1 Need for LDA:
• Logistic Regression performs well for binary classification but fails in the case of multi-class
classification problems with well-separated classes, whereas LDA handles these quite
efficiently.
• LDA can also be used in data pre-processing to reduce the number of features, just as PCA
does, which reduces the computational cost significantly.
3.4.2 Limitations:
• Linear decision boundaries may not effectively separate classes that are not linearly separable;
more flexible boundaries are desired.
• In cases where the number of features exceeds the number of observations, LDA might not
perform as desired. This is known as the Small Sample Size (SSS) problem, and regularization is
required.
Advantages of LDA:
1. Simple prototype classifier: distance to the class mean is used, so it is simple to interpret.
2. The decision boundary is linear: it is simple to implement and the classification is robust.
3. Dimension reduction: it provides an informative low-dimensional view of the data, which is
useful both for visualization and for feature engineering.
Shortcomings of LDA:
1. Linear decision boundaries may not adequately separate the classes. Support for more
general boundaries is desired.
2. In a high-dimensional setting, LDA uses too many parameters. A regularized version of
LDA is desired.
3. Support for more complex prototype classification is desired.
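As a brief illustration of both uses, the sketch below applies scikit-learn's LinearDiscriminantAnalysis for classification and for supervised dimensionality reduction; the Iris dataset and the choice of two components are illustrative, not prescribed by these notes.

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# LDA as a linear classifier (distance to class means in the projected space).
lda = LinearDiscriminantAnalysis(n_components=2)
lda.fit(X_train, y_train)
print("test accuracy:", lda.score(X_test, y_test))

# LDA as supervised dimensionality reduction: project onto 2 discriminant axes.
X_train_2d = lda.transform(X_train)
print("reduced shape:", X_train_2d.shape)
```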
3.5 Manifold Learning:
➢ Manifold learning for dimensionality reduction has recently gained much attention to
assist image processing tasks such as segmentation, registration, tracking,
recognition, and computational anatomy.
➢ The drawbacks of PCA in handling dimensionality reduction for non-linear, curved
surfaces necessitated the development of more advanced algorithms such as manifold
learning.
➢ There are different variants of manifold learning that solve the problem of reducing the
dimensions of data and feature sets obtained from real-world problems with uneven, curved
surfaces, for which linear methods give only a sub-optimal representation.
➢ This kind of data representation selectively chooses data points from a low-dimensional
manifold that is embedded in a high-dimensional space in an attempt to generalize linear
frameworks like PCA.
❖ Locally, a manifold looks like a flat, featureless space that behaves like Euclidean space.
Manifold learning problems are unsupervised: the algorithm learns the high-dimensional
structure of the data from the data itself, without using predetermined classifications and
without losing information about important characteristics of the original variables.
❖ The goal of manifold-learning algorithms is to recover the original domain structure, up to
some scaling and rotation. The non-linearity of these algorithms allows them to reveal the
domain structure even when the manifold is not linearly embedded.
❖ Manifold learning algorithms are divided into two categories:
➢ Global methods: allow high-dimensional data to be mapped from high-dimensional to
low-dimensional space such that global properties are preserved. Examples include
Multidimensional Scaling (MDS) and Isomap, covered in the following sections.
➢ Local methods: allow high-dimensional data to be mapped to low-dimensional space such
that local properties are preserved. Examples are Locally Linear Embedding (LLE),
Laplacian Eigenmaps (LE), Local Tangent Space Alignment (LTSA), and Hessian
Eigenmapping (HLLE).
➢ Three popular manifold learning algorithms:
❑ IsoMap (Isometric Mapping)
Isomap seeks a lower-dimensional representation that maintains
'geodesic distances' between the points. A geodesic distance is a generalization of
distance to curved surfaces. Hence, instead of measuring pure Euclidean distance
with the Pythagorean distance formula, Isomap optimizes distances along a
discovered manifold.
❑ Locally Linear Embeddings
Locally Linear Embedding uses a collection of tangent linear patches to model a
manifold. It can be thought of as performing a local PCA on each of these
neighborhoods, producing a linear hyperplane, and then comparing the results
globally to find the best non-linear embedding. The goal of LLE is to 'unroll' or
'unpack', in a distorted fashion, the structure of the data, so LLE results often show
a high density in the center with extending rays.
❑ t-SNE
t-SNE is one of the most popular choices for high-dimensional
visualization and stands for t-distributed Stochastic Neighbor Embedding.
The algorithm converts relationships in the original space into t-distributions,
i.e., normal-like distributions for small sample sizes with unknown standard
deviations. This makes t-SNE very sensitive to local structure, a common theme
in manifold learning. It is considered the go-to visualization method because of
the many advantages it possesses.
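As a small illustration of the three algorithms, the sketch below runs scikit-learn's Isomap, LocallyLinearEmbedding, and TSNE on a synthetic "swiss roll" surface; the neighbour counts and perplexity are arbitrary illustrative choices, not values taken from these notes.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap, LocallyLinearEmbedding, TSNE

# A non-linear, curved 2-D manifold embedded in 3-D space.
X, _ = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)

# Isomap: preserves geodesic distances along the discovered manifold.
X_iso = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

# LLE: fits local linear (tangent) patches and stitches them together.
X_lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2).fit_transform(X)

# t-SNE: matches neighbourhood distributions, emphasising local structure.
X_tsne = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

print(X_iso.shape, X_lle.shape, X_tsne.shape)
```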
3.6 Auto Encoders:
An AutoEncoder is an unsupervised artificial neural network that attempts to
encode the data by compressing it into lower dimensions (the bottleneck layer, or code) and
then decoding the data to reconstruct the original input. The bottleneck layer (or code) holds the
compressed representation of the input data. In an AutoEncoder the number of output units must
be equal to the number of input units, since we are attempting to reconstruct the input data.
AutoEncoders usually consist of an encoder and a decoder. The encoder encodes the
provided data into a lower dimension, which is the size of the bottleneck layer, and the decoder
decodes the compressed data back into its original form. The number of neurons in the layers of the
encoder decreases as we move through further layers, whereas the number of neurons in the
layers of the decoder increases as we move through further layers. Three layers are used in the
encoder and decoder in the following example: the encoder contains 32, 16, and 7 units in each
layer respectively, and the decoder contains 7, 16, and 32 units in each layer respectively. The
code size (the number of neurons in the bottleneck) must be less than the number of features in
the data. Before feeding the data into the AutoEncoder, the data must be scaled between 0 and 1
using a MinMax scaler, since we are going to use the sigmoid activation function in the output
layer, which outputs values between 0 and 1. When we use AutoEncoders for dimensionality
reduction, we extract the bottleneck layer and use it to reduce the dimensions. This process can be
viewed as feature extraction.
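A minimal Keras sketch of the AutoEncoder just described (encoder 32-16-7, decoder 7-16-32, sigmoid output, inputs scaled with MinMaxScaler) is shown below; the framework choice, the input dimensionality of 64 features, and the random placeholder data are assumptions for illustration, not specified in these notes.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from tensorflow import keras
from tensorflow.keras import layers

n_features = 64                       # assumed input dimensionality (illustrative)
X = np.random.rand(1000, n_features)  # placeholder data; use real data in practice
X_scaled = MinMaxScaler().fit_transform(X)   # sigmoid output needs [0, 1] inputs

inputs = keras.Input(shape=(n_features,))

# Encoder: 32 -> 16 -> 7 (bottleneck / code)
x = layers.Dense(32, activation="relu")(inputs)
x = layers.Dense(16, activation="relu")(x)
code = layers.Dense(7, activation="relu")(x)

# Decoder: 7 -> 16 -> 32, then reconstruct all input features with sigmoid
x = layers.Dense(7, activation="relu")(code)
x = layers.Dense(16, activation="relu")(x)
x = layers.Dense(32, activation="relu")(x)
outputs = layers.Dense(n_features, activation="sigmoid")(x)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_scaled, X_scaled, epochs=5, batch_size=32, verbose=0)

# For dimensionality reduction, keep only the encoder up to the bottleneck.
encoder = keras.Model(inputs, code)
X_reduced = encoder.predict(X_scaled)
print(X_reduced.shape)   # (1000, 7)
```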
The type of AutoEncoder used here is a deep AutoEncoder, in which the encoder and the
decoder are symmetrical. AutoEncoders do not have to be symmetrical; the encoder and decoder
can be non-symmetrical as well. Common types of AutoEncoders include:
• Deep Autoencoder
• Sparse Autoencoder
• Undercomplete Autoencoder
• Variational Autoencoder
• LSTM Autoencoder
Figure 4: Auto Encoders
3.7 AlexNet:
The AlexNet model was proposed in 2012 in the research paper named "ImageNet
Classification with Deep Convolutional Neural Networks" by Alex Krizhevsky and his colleagues.
➢ Then comes the fourth convolution operation, with 384 filters of size 3×3. The stride and
the padding are both 1, so the output size remains unchanged at 13×13×384.
➢ After this, we have the final convolution layer of size 3×3 with 256 such filters. The
stride and padding are set to 1, and the activation function is ReLU. The resulting feature
map is of shape 13×13×256.
If we look at the architecture, the number of filters increases as we go deeper, so more
features are extracted as we move deeper into the architecture. Also, the filter size reduces
as we go deeper (11×11, then 5×5, then 3×3), and the spatial size of the feature maps decreases as well.
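As a sketch of just the two layers described above (not the full AlexNet), the Keras snippet below applies the fourth and fifth convolutions to a 13×13×384 feature map and confirms that 'same' padding with stride 1 keeps the 13×13 spatial size; the framework choice is an assumption.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Input: the 13x13x384 feature map produced by the previous convolution layer.
x_in = keras.Input(shape=(13, 13, 384))

# 4th convolution: 384 filters of size 3x3, stride 1, padding 1 ('same'), ReLU.
x = layers.Conv2D(384, 3, strides=1, padding="same", activation="relu")(x_in)

# 5th (final) convolution: 256 filters of size 3x3, stride 1, padding 1, ReLU.
x = layers.Conv2D(256, 3, strides=1, padding="same", activation="relu")(x)

model = keras.Model(x_in, x)
model.summary()   # spatial size stays 13x13 (13x13x384 -> 13x13x256)
```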
3.8. VGG-16
➢ The major shortcoming of AlexNet, its many hyper-parameters and large kernel-sized filters
(11×11 and 5×5 in the first and second convolution layers, respectively), was addressed by VGG
Net by replacing the large filters with multiple 3×3 kernel-sized filters stacked one after another.
➢ The architecture developed by Simonyan and Zisserman was the 1st runner up of the
Visual Recognition Challenge of 2014.
➢ The architecture consists of 3×3 convolutional filters with a stride of 1 and 2×2 max-pooling
layers with a stride of 2.
➢ Padding is kept 'same' to preserve the spatial dimensions.
➢ There are 16 weight layers in the network. The input image is in RGB format with dimensions
224×224×3, and it passes through 5 blocks of convolutions (filters: 64, 128, 256, 512, 512),
each followed by max pooling.
➢ The output of these layers is fed into three fully connected layers and a softmax function
in the output layer.
➢ In total there are 138 Million parameters in VGG Net
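For reference, the pre-built VGG-16 available in Keras can be instantiated to check the figures above; the call below builds the standard 224×224×3 network with the three fully connected layers and roughly 138 million parameters (assuming TensorFlow/Keras is installed).

```python
from tensorflow.keras.applications import VGG16

# Standard VGG-16 with the three fully connected layers and a softmax output.
model = VGG16(weights=None, input_shape=(224, 224, 3), classes=1000)
model.summary()                  # 16 weight layers (13 conv + 3 fully connected)
print(model.count_params())      # ~138 million parameters
```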
3.9 ResNet:
ResNet, the winner of the ILSVRC 2015 competition, is a deep network with over 100
layers. Residual Networks (ResNets) are similar to VGG nets in their sequential approach, but
they also use "skip connections" and batch normalization, which help to train deep layers
without hampering performance. After VGG Nets, as CNNs were becoming deeper, it was getting
hard to train them because of the vanishing gradient problem, which makes the derivatives
extremely small; as a result, the overall performance saturates or even degrades. The idea of
skip connections came from highway networks, where gated shortcut connections were used.
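A minimal Keras sketch of a residual block with an identity skip connection and batch normalization is given below; the 56×56×64 input shape and filter count are illustrative assumptions, not values from these notes.

```python
from tensorflow import keras
from tensorflow.keras import layers

def residual_block(x, filters=64):
    """Basic residual block: two 3x3 convs with batch norm, plus an identity skip."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([y, shortcut])   # skip connection: output = F(x) + x
    return layers.ReLU()(y)

inputs = keras.Input(shape=(56, 56, 64))
outputs = residual_block(inputs)
model = keras.Model(inputs, outputs)
model.summary()
```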
3.10 Inception Net:
Figure 7: InceptionNet