Deep Learning
UNIT I - INTRODUCTION
Introduction to Machine Learning
Machine learning gives computers the ability to learn without being
explicitly programmed.
Learning from past data
Improvement over time
Data-driven technology
Decision Making
Hidden Patterns
Supervised Learning
Unsupervised Learning
Reinforcement Learning
Introduction to Machine Learning
Supervised Learning:
- The model is trained on labelled data (input-output pairs) and learns to map inputs to known outputs.
Applications:
Risk assessment
Image classification
Fraud detection
Spam filtering
Introduction to Machine Learning
Supervised Learning:
Classification
Regression
Introduction to Machine Learning
Supervised Learning:
Classification
- Used to classify data into categorical class labels such as Yes or No,
True or False, Male or Female, etc.
E.g.: weather forecasting (rain vs. no rain), market trends (up vs. down), spam filtering, etc.
Types:
Random Forest, Decision Trees, Logistic Regression,
Support Vector Machines (SVM)
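As a minimal illustration of classification with one of these algorithms (a sketch only, assuming scikit-learn is installed; the toy data and feature values below are made up):

# Classification sketch: a Decision Tree learns Yes/No-style labels from toy data.
from sklearn.tree import DecisionTreeClassifier

# Made-up labelled examples: each row is [feature_1, feature_2], labels are 0 ("No") or 1 ("Yes").
X = [[0.1, 1.2], [0.3, 0.9], [0.2, 1.0], [2.5, 2.1], [2.8, 2.6], [2.6, 2.4]]
y = [0, 0, 0, 1, 1, 1]

clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(X, y)                      # learn classification rules from the labelled data
print(clf.predict([[0.2, 1.1]]))   # expected to print [0], i.e. class "No"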
Introduction to Machine Learning
Supervised Learning:
Regression
- Used to predict continuous output values when there is a relationship between the input and output variables.
Types:
Linear Regression, Polynomial Regression, Regression Trees
Disadvantages:
- Requires labelled data; it cannot learn from unlabelled data.
Introduction to Machine Learning
Unsupervised Learning:
- Works on unlabelled data: the model discovers hidden patterns and groupings on its own, without predefined class labels.
Types:
Clustering, Association
Advantages:
- Can use unlabelled data, which is easier to collect than labelled data.
Disadvantages:
- Results are harder to validate, since there are no correct output labels to compare against.
Introduction to Machine Learning
Reinforcement Learning:
It improves upon itself and learns from new situations using a trial-and-error method.
It is trained to give the best possible solution for the best possible reward.
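A minimal sketch of this trial-and-error idea (not from the slides): an epsilon-greedy agent tries actions, observes rewards, and gradually settles on the action with the best average reward. The reward values and the exploration rate are made-up assumptions.

import random

true_rewards = [0.2, 0.5, 0.8]     # hidden average reward per action (unknown to the agent)
estimates = [0.0, 0.0, 0.0]        # agent's learned estimate per action
counts = [0, 0, 0]
epsilon = 0.1                      # fraction of the time the agent explores

for step in range(1000):
    if random.random() < epsilon:
        action = random.randrange(3)                  # explore: try a random action
    else:
        action = estimates.index(max(estimates))      # exploit: best action so far
    reward = true_rewards[action] + random.gauss(0, 0.1)   # noisy feedback from the environment
    counts[action] += 1
    # move the running average towards the observed reward (trial-and-error learning)
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)   # the estimate for the last action (index 2) should end up highest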
Introduction to Machine Learning
Types of Reinforcement learning:
• Positive reinforcement
• Negative reinforcement
Linear Models
Characteristics of Perceptron:
• It can classify new data and provide probabilities, using both continuous
and discrete datasets.
• Its activation is typically a sign function or a sigmoid function.
Linear Models
Logistic Regression:
- A supervised classification algorithm that predicts the probability of a categorical outcome (e.g., Yes or No) by applying the sigmoid function to a weighted sum of the inputs.
Linear Models
Logistic Function (Sigmoid Function):
• It maps any real value into another value within the range of 0 to 1.
• A threshold value is used: values above the threshold tend towards 1,
and values below the threshold tend towards 0.
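A minimal sketch of the sigmoid and the thresholding step (the threshold of 0.5 is an assumed, commonly used value, not taken from the slides):

import math

def sigmoid(z):
    # maps any real value into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(-4.0), sigmoid(0.0), sigmoid(4.0))   # roughly 0.018, 0.5, 0.982

threshold = 0.5
p = sigmoid(2.3)                       # predicted probability
label = 1 if p >= threshold else 0     # above the threshold -> class 1
print(p, label)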
Linear Models
Steps in Logistic Regression:
1. Compute the weighted sum of the inputs: z = w·x + b
2. Apply the sigmoid (logistic) function to obtain a probability: ŷ = σ(z)
3. Compare ŷ with the threshold value to assign the class label.
4. Adjust the weights and bias to minimise the prediction error (e.g., by gradient descent) and repeat over the training data.
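A compact, self-contained sketch of these steps on a toy one-feature dataset (the data, learning rate and iteration count are assumptions for illustration):

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Made-up training data: a single feature x with binary labels y.
xs = [0.5, 1.0, 1.5, 3.0, 3.5, 4.0]
ys = [0,   0,   0,   1,   1,   1]

w, b = 0.0, 0.0      # parameters to be learned
lr = 0.1             # learning rate (assumed)

for _ in range(2000):
    for x, y in zip(xs, ys):
        p = sigmoid(w * x + b)   # steps 1-2: weighted sum, then sigmoid
        grad = p - y             # gradient of the log loss with respect to z
        w -= lr * grad * x       # step 4: gradient-descent update
        b -= lr * grad

# step 3: threshold the predicted probability to assign a class
print(1 if sigmoid(w * 2.0 + b) >= 0.5 else 0)   # expected to print 0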
Introduction to Neural Network
How Neural Networks work:
Artificial neurons (perceptrons) consist of:
• Input
• Weight
• Bias
• Activation Function
• Output
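A minimal sketch of a single artificial neuron built from these five parts (the input, weight and bias values are made up; the sign function is used as the activation):

def sign(z):
    # activation function: outputs +1 or -1 depending on the sign of z
    return 1 if z >= 0 else -1

def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias   # weighted sum plus bias
    return sign(z)                                           # output after the activation

print(neuron(inputs=[0.5, -1.0], weights=[0.8, 0.4], bias=0.1))   # prints 1 for these values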
Introduction to Neural Network
Each neuron receives many inputs and produces a single
output. Neural networks are composed of layers of neurons.
These layers consist of the following:
• Input layer - receives data represented by a numeric
value
• Multiple hidden layers - perform most of the computations
required by the network
• Output layer - predicts the output
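As an illustration of these three kinds of layers (a sketch only, assuming TensorFlow/Keras is available; the layer sizes and activations are arbitrary choices):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),                     # input layer: 4 numeric values
    tf.keras.layers.Dense(8, activation="relu"),    # hidden layer 1
    tf.keras.layers.Dense(8, activation="relu"),    # hidden layer 2
    tf.keras.layers.Dense(3, activation="softmax"), # output layer: predicts class probabilities
])
model.summary()   # prints the layer-by-layer structure of the network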
Introduction to Neural Network
Feed Forward Propagation Process:
Feed forward propagation takes place when the hidden layer accepts the
input data, processes it as per the activation function, and passes it to the
output layer. The neuron in the output layer with the highest probability projects
the result.
1. The input layer receives the input data, combines it with the initial weights,
and forwards it to the hidden layer.
2. The hidden layers convert the input data: each neuron multiplies the inputs
by its weight values, sums them, and then adds its bias.
3. The hidden layers transform the input data and pass it from one layer to the
next, finally sending it through the activation function.
4. The activated neurons transfer the information to the output layer.
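A minimal sketch of these four steps, computed neuron by neuron (all weights, biases and inputs below are made-up values; the sigmoid is used as the activation function):

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer_forward(inputs, weights, biases):
    # for each neuron: multiply inputs by its weights, sum, add its bias, then activate
    return [sigmoid(sum(x * w for x, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, 0.2]                                                   # step 1: input data
hidden = layer_forward(x, [[0.1, 0.4], [0.3, 0.7], [0.5, 0.2]],
                       [0.0, 0.1, -0.1])                         # steps 2-3: hidden layer
output = layer_forward(hidden, [[0.6, 0.9, 0.2]], [0.05])        # step 4: output layer
print(output)   # the single output neuron's activation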
Introduction to Neural Network
Different Types:
Neural networks are classified according to the mathematical operations and
principles they use to determine the output.
1. Perceptron – Single Layer Network (I/P & O/P)
2. Feed Forward NN – Data moves in a Single Direction
3. Radial Basis Function (RBF) – classifies data based on its distance
from a centre point, using interpolation (3 layers)
4. Recurrent NN – feedback NN (outputs are fed back as inputs)
5. Convolutional NN – composed of convolution, pooling and
fully connected layers
6. Modular NN – composed of independent networks working
individually, whose results are combined to produce the output
Introduction to Neural Network
Applications of NN:
1. Facial Recognition
2. Weather Forecasting
3. Music composition
Advantages of NN:
1. Fault tolerance
2. Real-time operations
3. Adaptive learning
Disadvantages of NN:
- Hardware dependence
SHALLOW NEURAL NETWORKS
Structure of SNN:
- 3 layers: input, hidden and output
- Only 1 or 2 hidden layers
- Uses the forward propagation process
SHALLOW NEURAL NETWORKS
Steps followed for SNN:
1. The first equation calculates the intermediate output Z[1] of the
first hidden layer: Z[1] = W[1]·X + b[1]
2. The second equation calculates the final output A[1] of the first
hidden layer: A[1] = σ(Z[1])
3. The third equation calculates the intermediate output Z[2] of the
output layer: Z[2] = W[2]·A[1] + b[2]
4. The fourth equation calculates the final output A[2] of the output
layer, which is also the final output of the whole neural network:
A[2] = σ(Z[2])
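A direct sketch of these four equations (assuming NumPy; the layer sizes and the random initialization are arbitrary):

import numpy as np

def sigmoid(Z):
    return 1.0 / (1.0 + np.exp(-Z))

rng = np.random.default_rng(0)
n_x, n_h, n_y = 3, 4, 1                 # input, hidden and output sizes (arbitrary)

X = rng.normal(size=(n_x, 1))           # one input example as a column vector
W1 = rng.normal(size=(n_h, n_x)); b1 = np.zeros((n_h, 1))
W2 = rng.normal(size=(n_y, n_h)); b2 = np.zeros((n_y, 1))

Z1 = W1 @ X + b1        # Z[1] = W[1]·X + b[1]
A1 = sigmoid(Z1)        # A[1] = σ(Z[1])
Z2 = W2 @ A1 + b2       # Z[2] = W[2]·A[1] + b[2]
A2 = sigmoid(Z2)        # A[2] = σ(Z[2]), the network's final output
print(A2)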
SHALLOW NEURAL NETWORKS
- Takes input only as vectors
- The vectorized forward pass above is the standard method for computing the output