Unit 1: Introduction to Neural Networks
- Definition: Inspired by biological neural networks, ANNs are computing systems composed of interconnected nodes (artificial neurons) that pass signals to one another over weighted connections.
- Structure: An input layer, one or more hidden layers, and an output layer of connected neurons.
- Computation: Each neuron processes a weighted sum of inputs, adds a bias, and passes the result through an activation function (see the sketch below).
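A minimal sketch of that neuron computation in Python (the input values, weights, bias, and the choice of a sigmoid activation are illustrative assumptions, not taken from these notes):

```python
import numpy as np

# One artificial neuron: weighted sum of inputs plus a bias,
# passed through a sigmoid activation.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

inputs = np.array([0.5, -1.2, 3.0])    # x: incoming signals (illustrative)
weights = np.array([0.4, 0.1, -0.6])   # w: connection strengths (illustrative)
bias = 0.2                             # b: shifts the activation's input

z = np.dot(weights, inputs) + bias     # weighted sum plus bias
output = sigmoid(z)                    # non-linear activation
print(output)
```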
Advantages:
2. Fault Tolerance: Can still function even with partial damage to the network.
Disadvantages:
2. Lack of Transparency: Limited insight into the internal workings ("black box").
- Definition: A specialized branch of machine learning using neural networks with multiple hidden layers to learn hierarchical representations of data.
- Characteristics:
- Applications:
Architectures:
1. Deep Neural Networks (DNNs): Networks with multiple hidden layers for complex non-linear
relationships.
2. Deep Belief Networks (DBNs): Consist of stacked layers of Restricted Boltzmann Machines.
3. Recurrent Neural Networks (RNNs): Suitable for sequential data like time-series and text.
1. Weights and Biases: Weights determine the strength of the connection. Bias shifts the activation
function's output.
2. Activation Functions: Introduce non-linearity (e.g., Sigmoid, Tanh, ReLU).
3. Error and Loss Functions: Measure the deviation between predicted and actual outcomes.
4. Learning Rate (alpha): Determines how quickly the network updates weights during training, as illustrated below.
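As a hedged illustration of how the learning rate scales weight updates, the snippet below performs a single gradient-descent-style step; the weights and gradient values are made up for the example:

```python
import numpy as np

alpha = 0.1                        # learning rate (illustrative value)
w = np.array([0.5, -0.3])          # current weights (illustrative)
grad = np.array([0.8, -0.2])       # gradient of the loss w.r.t. w (illustrative)

w_new = w - alpha * grad           # a smaller alpha means a smaller step
print(w_new)                       # approximately [0.42, -0.28]
```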
4. Learning Rules in ANNs:
1. Hebbian Learning Rule: Increases the connection strength between neurons that activate together (a small update sketch follows this list).
4. Competitive Learning: Nodes compete to represent the input, and the winner is activated.
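A small sketch of the Hebbian update from item 1, using the common form delta_w = eta * x * y; the learning rate eta and the activation values are illustrative assumptions:

```python
import numpy as np

eta = 0.1                          # Hebbian learning rate (illustrative)
x = np.array([1.0, 0.0, 1.0])      # pre-synaptic activations (illustrative)
w = np.array([0.2, 0.5, -0.1])     # connection weights (illustrative)

y = np.dot(w, x)                   # post-synaptic activation
w += eta * x * y                   # strengthen connections whose inputs fired with the output
print(w)
```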
1. Perceptron:
- Steps: Initialize the weights, compute the weighted sum and apply a step activation, then adjust the weights whenever the prediction is wrong (see the first sketch after this list).
2. Backpropagation:
- Two stages: a forward pass that computes the output and the error, and a backward pass that propagates the error back through the network and updates the weights (see the second sketch below).
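The two sketches below illustrate these algorithms. Both are minimal, hedged examples: the step activation, the tiny AND dataset, the single sigmoid neuron, and all numeric values are assumptions made for illustration, not taken from these notes.

```python
import numpy as np

# Perceptron: step activation plus the classic error-driven update rule,
# trained here on a tiny AND dataset.
def step(z):
    return 1 if z >= 0 else 0

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])             # logical AND targets
w, b, lr = np.zeros(2), 0.0, 0.1

for _ in range(10):                    # a few passes over the data
    for xi, target in zip(X, y):
        pred = step(np.dot(w, xi) + b)
        error = target - pred
        w += lr * error * xi           # weights change only on mistakes
        b += lr * error

print(w, b)                            # a weight/bias pair that realizes AND
```

```python
import numpy as np

# Backpropagation's two stages on a single sigmoid neuron
# with a squared-error loss.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.0])              # input (illustrative)
w = np.array([0.3, 0.8])               # weights (illustrative)
b, target, lr = 0.1, 1.0, 0.5

# Stage 1: forward pass - compute the prediction and the loss
z = np.dot(w, x) + b
y_hat = sigmoid(z)
loss = 0.5 * (y_hat - target) ** 2

# Stage 2: backward pass - propagate the error and update weights
delta = (y_hat - target) * y_hat * (1 - y_hat)   # dLoss/dz via the chain rule
w -= lr * delta * x                              # dz/dw = x
b -= lr * delta                                  # dz/db = 1
print(loss, w, b)
```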
1. Facial Recognition: Using Convolutional Neural Networks (CNNs) for robust surveillance and
authentication.
2. Stock Market Prediction: Multilayer Perceptrons analyze historical stock data for forecasts.
3. Healthcare:
- Recurrent Neural Networks (RNNs) for voice and patient data recognition.
6. Social Media: Analyzing user behavior and preferences for targeted content.
- Activation Functions: Add non-linearity to model outputs and allow networks to learn complex
mappings.
- Loss Functions: Quantify the error between predictions and targets so it can be minimized (e.g., MSE, Binary Cross-Entropy, and Categorical Cross-Entropy); a sketch of common activation and loss functions follows this summary.
- Deep learning extends traditional ANNs with additional layers and capabilities.
- Applications are vast, spanning industries like healthcare, defense, and finance.
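To make the activation and loss functions from the summary concrete, here is a hedged sketch of ReLU, sigmoid, MSE, and binary cross-entropy; the sample predictions and targets are illustrative only:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)           # passes positives, zeroes out negatives

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))     # squashes values into (0, 1)

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    y_pred = np.clip(y_pred, eps, 1 - eps)        # avoid log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

z = np.array([-2.0, 0.5, 3.0])
print(relu(z), sigmoid(z))

y_true = np.array([1.0, 0.0, 1.0])      # illustrative labels
y_pred = np.array([0.9, 0.2, 0.6])      # illustrative predicted probabilities
print(mse(y_true, y_pred), binary_cross_entropy(y_true, y_pred))
```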