Deep Learning Unit 2
A perceptron is a basic unit in a neural network that takes multiple inputs, applies weights to them,
sums them up, and produces an output based on whether the sum exceeds a certain threshold.
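As a rough sketch, this rule fits in a few lines of Python (the weights, bias, and threshold below are illustrative, not learned values):

    # Minimal perceptron: weighted sum of inputs plus bias, then a threshold step.
    def perceptron(inputs, weights, bias, threshold=0.0):
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 if total > threshold else 0

    # Example: hand-picked weights make this 2-input perceptron act as an AND gate.
    print(perceptron([1, 1], weights=[0.6, 0.6], bias=-1.0))  # 1
    print(perceptron([1, 0], weights=[0.6, 0.6], bias=-1.0))  # 0

In practice the weights and bias are learned from data rather than set by hand.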
Activation functions determine the output of each neuron, introducing the nonlinearity that
enables a network to learn complex patterns.
Purpose: They transform input signals into output signals, allowing neural networks to model and
understand nonlinear relationships in data.
Common activation functions: Sigmoid, Tanh, Linear, Hard tanh, Softmax, and Rectified Linear Unit (ReLU).
Sigmoid Function: squashes any real-valued input into the range (0, 1), computed as sigma(x) = 1 / (1 + e^(-x)).
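For comparison, here is a quick sketch of several of these activations in plain Python (hard tanh is shown in its usual clipped-linear form):

    import math

    def sigmoid(x):       # squashes input to (0, 1)
        return 1.0 / (1.0 + math.exp(-x))

    def tanh(x):          # squashes input to (-1, 1), zero-centered
        return math.tanh(x)

    def relu(x):          # rectified linear: 0 for negatives, identity otherwise
        return max(0.0, x)

    def hard_tanh(x):     # piecewise-linear approximation of tanh
        return max(-1.0, min(1.0, x))

    for x in (-2.0, 0.0, 2.0):
        print(x, sigmoid(x), tanh(x), relu(x), hard_tanh(x))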
Forward Propagation:
1. Input: Data point enters the network's first layer.
2. Activation: Each neuron in a layer calculates a weighted sum of its inputs and applies an
activation function.
3. Propagation: This activated value becomes the input for the next layer's neurons.
4. Output: The final layer produces the network's prediction.
5. Comparison: The prediction is compared to the actual target value to compute the error (see the sketch below).
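A minimal NumPy sketch of these five steps for a tiny fully connected network (the layer sizes and random weights are illustrative):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Illustrative network: 3 inputs -> 4 hidden neurons -> 1 output.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
    W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

    x = np.array([0.5, -1.0, 2.0])    # step 1: input enters the first layer
    h = sigmoid(W1 @ x + b1)          # steps 2-3: weighted sum + activation, fed forward
    y_pred = sigmoid(W2 @ h + b2)     # step 4: final layer produces the prediction
    y_true = np.array([1.0])
    error = y_true - y_pred           # step 5: compare prediction to the target
    print(y_pred, error)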
Back Propagation:
1. Initialize Weights: Start with random weights and biases for the neural network.
2. Forward Pass: Pass input data through the network to make predictions.
3. Calculate Error: Compare predicted outputs with actual outputs to compute the error.
4. Backward Pass: Propagate the error backward through the network to adjust weights.
5. Update Weights: Use the error information to update weights and biases, refining the network's
predictions.
6. Repeat steps 2-5 until the network learns the patterns in the data (see the sketch below).
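A minimal sketch of this loop for a single sigmoid neuron trained by gradient descent on a toy OR dataset (the learning rate and epoch count are illustrative; the backward pass applies the sigmoid derivative p(1 - p) by hand):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy dataset: learn the OR function of two binary inputs.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 1], dtype=float)

    rng = np.random.default_rng(1)
    w, b = rng.normal(size=2), 0.0        # step 1: random initialization
    lr = 0.5                              # illustrative learning rate

    for epoch in range(2000):             # step 6: repeat until patterns are learned
        y_pred = sigmoid(X @ w + b)       # step 2: forward pass
        err = y_pred - y                  # step 3: error
        grad = err * y_pred * (1 - y_pred)    # step 4: backward pass (chain rule)
        w -= lr * X.T @ grad / len(X)         # step 5: update weights
        b -= lr * grad.mean()                 #         and bias

    print(np.round(sigmoid(X @ w + b), 2))    # moves toward [0, 1, 1, 1]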
Loss Functions:
1. They quantify the difference between the network's predictions and the actual target values.
2. The goal during training is to minimize this loss function.
3. Common loss functions: Mean Absolute Error, Mean Squared Error, Binary Cross-Entropy, and
Categorical Cross-Entropy.
Mean Absolute Error (MAE):
1. Definition: Measures the average absolute difference between predicted and actual values.
2. Formula: MAE = (1/n) * Σ |y_i - ŷ_i|
3. Calculation: Compute the average of the absolute differences between predictions and actual values.
Mean Squared Error (MSE):
1. Definition: Measures the average squared difference between predicted and actual values.
2. Formula: MSE = (1/n) * Σ (y_i - ŷ_i)^2
3. Calculation: Compute the average of the squared differences between predictions and actual values (see the sketch below).
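Both metrics are short NumPy one-liners (the arrays below are illustrative):

    import numpy as np

    def mae(y_true, y_pred):
        return np.mean(np.abs(y_true - y_pred))

    def mse(y_true, y_pred):
        return np.mean((y_true - y_pred) ** 2)

    y_true = np.array([3.0, -0.5, 2.0])
    y_pred = np.array([2.5,  0.0, 2.0])
    print(mae(y_true, y_pred))   # (0.5 + 0.5 + 0.0) / 3 = 0.333...
    print(mse(y_true, y_pred))   # (0.25 + 0.25 + 0.0) / 3 = 0.1666...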
Sentiment analysis, also known as opinion mining, is a technique used to understand the emotional
tone of text data.
1. Input: Text data such as reviews, social media posts, or articles is fed into the system.
2. Output: The text is categorized as positive (e.g., happiness), negative (e.g., anger), or neutral
(no strong sentiment).
3. Benefits: Offers insight into opinions and emotions that raw text statistics miss.
4. Limitations: Sarcasm, slang, and complex emotions are hard to detect, so subtle nuances can be
missed (see the toy sketch below).
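As a toy illustration of the input-to-output flow, here is a keyword-lexicon classifier in Python (the word lists are made up for this example; real systems use trained models and still face the limitations above):

    # Toy lexicon-based sentiment classifier (illustrative word lists).
    POSITIVE = {"great", "love", "happy", "excellent"}
    NEGATIVE = {"bad", "hate", "angry", "terrible"}

    def classify(text):
        words = text.lower().split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        if score > 0:
            return "positive"
        if score < 0:
            return "negative"
        return "neutral"

    print(classify("I love this great phone"))      # positive
    print(classify("terrible product, I hate it"))  # negative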
In deep learning, regularization refers to techniques that prevent overfitting by constraining the
complexity of the model. This helps the model generalize better to unseen data.
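As one concrete example, L2 regularization (weight decay) adds a penalty proportional to the squared weights to the loss, pulling weights toward zero; a minimal sketch follows, where the strength lam is illustrative:

    import numpy as np

    def l2_loss(w, base_loss, lam=0.01):
        # Total loss = data-fit term + penalty that grows with weight magnitude.
        return base_loss + lam * np.sum(w ** 2)

    def gradient_step(w, grad_base, lr=0.1, lam=0.01):
        # The L2 penalty contributes 2*lam*w to the gradient, shrinking weights.
        return w - lr * (grad_base + 2 * lam * w)

    w = np.array([3.0, -2.0])
    print(l2_loss(w, base_loss=1.0))                 # 1.0 + 0.01 * 13 = 1.13
    print(gradient_step(w, grad_base=np.zeros(2)))   # weights decay toward zero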
Single-layer feed-forward network:
1. Structure: Inputs connect directly to a single layer of output neurons; there are no hidden layers.
2. Output: The final output is the result of the activation function applied in each neuron.
3. Learning: During training, weights are adjusted based on prediction errors to improve the mapping
of inputs to outputs.
Multi-layer feed-forward network:
1. Multiple Layers: Inputs pass through one or more hidden layers before the output layer; network
complexity and learning capability depend on the number of hidden layers and neurons.
2. Output Layer: The final layer's neurons generate the predictions (e.g., image classification).
3. Learning: Weights are adjusted during training based on prediction errors, which propagate backward
through the layers (backpropagation) to improve learning (see the sketch below).
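A minimal sketch of such a network as a stack of (weights, bias) layers in NumPy; a single-layer network is simply the special case of one pair (all sizes and weights below are illustrative):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(x, layers):
        # Each layer is a (weights, bias) pair; the activated output of one
        # layer becomes the input to the next.
        for W, b in layers:
            x = sigmoid(W @ x + b)
        return x

    rng = np.random.default_rng(3)
    layers = [
        (rng.normal(size=(5, 3)), np.zeros(5)),   # hidden layer 1: 3 -> 5
        (rng.normal(size=(4, 5)), np.zeros(4)),   # hidden layer 2: 5 -> 4
        (rng.normal(size=(2, 4)), np.zeros(2)),   # output layer:   4 -> 2
    ]
    print(forward(np.array([0.2, -0.7, 1.5]), layers))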